00:00:00.000 Started by upstream project "autotest-per-patch" build number 132581 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:29.336 The recommended git tool is: git 00:00:29.336 using credential 00000000-0000-0000-0000-000000000002 00:00:29.338 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:29.348 Fetching changes from the remote Git repository 00:00:29.351 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:29.362 Using shallow fetch with depth 1 00:00:29.362 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:29.362 > git --version # timeout=10 00:00:29.374 > git --version # 'git version 2.39.2' 00:00:29.374 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.385 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.385 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:01:09.239 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:01:09.250 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:01:09.260 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:01:09.260 > git config core.sparsecheckout # timeout=10 00:01:09.269 > git read-tree -mu HEAD # timeout=10 00:01:09.281 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:01:09.300 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:01:09.300 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:01:09.368 [Pipeline] Start of Pipeline 00:01:09.383 [Pipeline] library 00:01:09.386 Loading library shm_lib@master 00:01:10.386 Library shm_lib@master is cached. Copying from home. 00:01:10.414 [Pipeline] node 00:01:10.469 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest 00:01:10.470 [Pipeline] { 00:01:10.480 [Pipeline] catchError 00:01:10.481 [Pipeline] { 00:01:10.495 [Pipeline] wrap 00:01:10.505 [Pipeline] { 00:01:10.513 [Pipeline] stage 00:01:10.515 [Pipeline] { (Prologue) 00:01:10.529 [Pipeline] echo 00:01:10.531 Node: VM-host-SM0 00:01:10.538 [Pipeline] cleanWs 00:01:10.547 [WS-CLEANUP] Deleting project workspace... 00:01:10.547 [WS-CLEANUP] Deferred wipeout is used... 00:01:10.553 [WS-CLEANUP] done 00:01:10.779 [Pipeline] setCustomBuildProperty 00:01:10.833 [Pipeline] httpRequest 00:01:13.619 [Pipeline] echo 00:01:13.621 Sorcerer 10.211.164.101 is alive 00:01:13.631 [Pipeline] retry 00:01:13.632 [Pipeline] { 00:01:13.646 [Pipeline] httpRequest 00:01:13.651 HttpMethod: GET 00:01:13.651 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:13.652 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:13.657 Response Code: HTTP/1.1 200 OK 00:01:13.657 Success: Status code 200 is in the accepted range: 200,404 00:01:13.658 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:30.285 [Pipeline] } 00:01:30.303 [Pipeline] // retry 00:01:30.310 [Pipeline] sh 00:01:30.593 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:30.610 [Pipeline] httpRequest 00:01:31.029 [Pipeline] echo 00:01:31.031 Sorcerer 10.211.164.101 is alive 00:01:31.041 [Pipeline] retry 00:01:31.043 [Pipeline] { 00:01:31.057 [Pipeline] httpRequest 00:01:31.062 HttpMethod: GET 00:01:31.063 URL: 
http://10.211.164.101/packages/spdk_9094b9600534b48883cb49609e39502a4c8f4f30.tar.gz 00:01:31.063 Sending request to url: http://10.211.164.101/packages/spdk_9094b9600534b48883cb49609e39502a4c8f4f30.tar.gz 00:01:31.068 Response Code: HTTP/1.1 200 OK 00:01:31.069 Success: Status code 200 is in the accepted range: 200,404 00:01:31.069 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_9094b9600534b48883cb49609e39502a4c8f4f30.tar.gz 00:03:50.086 [Pipeline] } 00:03:50.104 [Pipeline] // retry 00:03:50.114 [Pipeline] sh 00:03:50.395 + tar --no-same-owner -xf spdk_9094b9600534b48883cb49609e39502a4c8f4f30.tar.gz 00:03:53.691 [Pipeline] sh 00:03:53.975 + git -C spdk log --oneline -n5 00:03:53.975 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:03:53.975 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:03:53.975 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:03:53.975 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:03:53.975 5592070b3 doc: update nvmf_tracing.md 00:03:53.996 [Pipeline] writeFile 00:03:54.009 [Pipeline] sh 00:03:54.287 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:54.300 [Pipeline] sh 00:03:54.586 + cat autorun-spdk.conf 00:03:54.586 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:54.586 SPDK_RUN_ASAN=1 00:03:54.586 SPDK_RUN_UBSAN=1 00:03:54.586 SPDK_TEST_RAID=1 00:03:54.586 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:54.593 RUN_NIGHTLY=0 00:03:54.595 [Pipeline] } 00:03:54.609 [Pipeline] // stage 00:03:54.626 [Pipeline] stage 00:03:54.629 [Pipeline] { (Run VM) 00:03:54.643 [Pipeline] sh 00:03:54.954 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:54.954 + echo 'Start stage prepare_nvme.sh' 00:03:54.954 Start stage prepare_nvme.sh 00:03:54.954 + [[ -n 0 ]] 00:03:54.954 + disk_prefix=ex0 00:03:54.954 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:03:54.954 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 
00:03:54.954 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:03:54.954 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:54.954 ++ SPDK_RUN_ASAN=1 00:03:54.954 ++ SPDK_RUN_UBSAN=1 00:03:54.954 ++ SPDK_TEST_RAID=1 00:03:54.954 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:54.954 ++ RUN_NIGHTLY=0 00:03:54.954 + cd /var/jenkins/workspace/raid-vg-autotest 00:03:54.954 + nvme_files=() 00:03:54.954 + declare -A nvme_files 00:03:54.954 + backend_dir=/var/lib/libvirt/images/backends 00:03:54.954 + nvme_files['nvme.img']=5G 00:03:54.954 + nvme_files['nvme-cmb.img']=5G 00:03:54.954 + nvme_files['nvme-multi0.img']=4G 00:03:54.954 + nvme_files['nvme-multi1.img']=4G 00:03:54.954 + nvme_files['nvme-multi2.img']=4G 00:03:54.954 + nvme_files['nvme-openstack.img']=8G 00:03:54.954 + nvme_files['nvme-zns.img']=5G 00:03:54.954 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:54.954 + (( SPDK_TEST_FTL == 1 )) 00:03:54.954 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:54.954 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:54.954 + for nvme in "${!nvme_files[@]}" 00:03:54.954 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:03:54.954 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:54.954 + for nvme in "${!nvme_files[@]}" 00:03:54.954 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:03:54.954 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:54.954 + for nvme in "${!nvme_files[@]}" 00:03:54.954 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:03:54.954 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:54.954 + for nvme in "${!nvme_files[@]}" 00:03:54.954 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:03:54.954 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:54.954 + for nvme in "${!nvme_files[@]}" 00:03:54.954 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:03:54.954 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:54.954 + for nvme in "${!nvme_files[@]}" 00:03:54.954 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:03:55.214 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:55.214 + for nvme in "${!nvme_files[@]}" 00:03:55.214 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:03:55.472 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:55.472 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:03:55.472 + echo 'End stage prepare_nvme.sh' 00:03:55.472 End stage prepare_nvme.sh 00:03:55.483 [Pipeline] sh 00:03:55.760 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:55.760 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:03:55.760 00:03:55.760 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:03:55.760 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:03:55.760 
VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:03:55.760 HELP=0 00:03:55.760 DRY_RUN=0 00:03:55.760 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:03:55.760 NVME_DISKS_TYPE=nvme,nvme, 00:03:55.760 NVME_AUTO_CREATE=0 00:03:55.760 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:03:55.760 NVME_CMB=,, 00:03:55.760 NVME_PMR=,, 00:03:55.760 NVME_ZNS=,, 00:03:55.760 NVME_MS=,, 00:03:55.760 NVME_FDP=,, 00:03:55.760 SPDK_VAGRANT_DISTRO=fedora39 00:03:55.760 SPDK_VAGRANT_VMCPU=10 00:03:55.760 SPDK_VAGRANT_VMRAM=12288 00:03:55.760 SPDK_VAGRANT_PROVIDER=libvirt 00:03:55.760 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:55.760 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:55.760 SPDK_OPENSTACK_NETWORK=0 00:03:55.760 VAGRANT_PACKAGE_BOX=0 00:03:55.760 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:55.760 FORCE_DISTRO=true 00:03:55.760 VAGRANT_BOX_VERSION= 00:03:55.760 EXTRA_VAGRANTFILES= 00:03:55.760 NIC_MODEL=e1000 00:03:55.760 00:03:55.760 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:03:55.760 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:03:59.052 Bringing machine 'default' up with 'libvirt' provider... 00:03:59.309 ==> default: Creating image (snapshot of base box volume). 00:03:59.567 ==> default: Creating domain with the following settings... 
00:03:59.567 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732716209_51e96a39dfbbca592eeb 00:03:59.567 ==> default: -- Domain type: kvm 00:03:59.567 ==> default: -- Cpus: 10 00:03:59.567 ==> default: -- Feature: acpi 00:03:59.567 ==> default: -- Feature: apic 00:03:59.567 ==> default: -- Feature: pae 00:03:59.567 ==> default: -- Memory: 12288M 00:03:59.567 ==> default: -- Memory Backing: hugepages: 00:03:59.567 ==> default: -- Management MAC: 00:03:59.567 ==> default: -- Loader: 00:03:59.567 ==> default: -- Nvram: 00:03:59.567 ==> default: -- Base box: spdk/fedora39 00:03:59.567 ==> default: -- Storage pool: default 00:03:59.567 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732716209_51e96a39dfbbca592eeb.img (20G) 00:03:59.567 ==> default: -- Volume Cache: default 00:03:59.567 ==> default: -- Kernel: 00:03:59.567 ==> default: -- Initrd: 00:03:59.567 ==> default: -- Graphics Type: vnc 00:03:59.567 ==> default: -- Graphics Port: -1 00:03:59.567 ==> default: -- Graphics IP: 127.0.0.1 00:03:59.567 ==> default: -- Graphics Password: Not defined 00:03:59.567 ==> default: -- Video Type: cirrus 00:03:59.567 ==> default: -- Video VRAM: 9216 00:03:59.567 ==> default: -- Sound Type: 00:03:59.567 ==> default: -- Keymap: en-us 00:03:59.567 ==> default: -- TPM Path: 00:03:59.567 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:59.567 ==> default: -- Command line args: 00:03:59.567 ==> default: -> value=-device, 00:03:59.567 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:59.567 ==> default: -> value=-drive, 00:03:59.567 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:03:59.567 ==> default: -> value=-device, 00:03:59.567 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:59.567 ==> default: -> value=-device, 00:03:59.567 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:59.567 ==> default: -> value=-drive, 00:03:59.567 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:59.567 ==> default: -> value=-device, 00:03:59.567 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:59.567 ==> default: -> value=-drive, 00:03:59.567 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:59.567 ==> default: -> value=-device, 00:03:59.567 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:59.567 ==> default: -> value=-drive, 00:03:59.567 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:59.567 ==> default: -> value=-device, 00:03:59.567 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:59.825 ==> default: Creating shared folders metadata... 00:03:59.825 ==> default: Starting domain. 00:04:01.727 ==> default: Waiting for domain to get an IP address... 00:04:19.806 ==> default: Waiting for SSH to become available... 00:04:20.762 ==> default: Configuring and enabling network interfaces... 00:04:26.025 default: SSH address: 192.168.121.97:22 00:04:26.025 default: SSH username: vagrant 00:04:26.025 default: SSH auth method: private key 00:04:27.435 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:35.544 ==> default: Mounting SSHFS shared folder... 00:04:36.918 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:36.918 ==> default: Checking Mount.. 
00:04:38.294 ==> default: Folder Successfully Mounted! 00:04:38.294 ==> default: Running provisioner: file... 00:04:39.231 default: ~/.gitconfig => .gitconfig 00:04:39.489 00:04:39.489 SUCCESS! 00:04:39.489 00:04:39.489 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:39.489 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:39.489 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:04:39.489 00:04:39.497 [Pipeline] } 00:04:39.513 [Pipeline] // stage 00:04:39.522 [Pipeline] dir 00:04:39.523 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:04:39.525 [Pipeline] { 00:04:39.537 [Pipeline] catchError 00:04:39.539 [Pipeline] { 00:04:39.548 [Pipeline] sh 00:04:39.820 + vagrant ssh-config --host vagrant 00:04:39.820 + sed -ne /^Host/,$p 00:04:39.820 + tee ssh_conf 00:04:43.141 Host vagrant 00:04:43.141 HostName 192.168.121.97 00:04:43.141 User vagrant 00:04:43.141 Port 22 00:04:43.141 UserKnownHostsFile /dev/null 00:04:43.141 StrictHostKeyChecking no 00:04:43.141 PasswordAuthentication no 00:04:43.141 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:43.141 IdentitiesOnly yes 00:04:43.141 LogLevel FATAL 00:04:43.141 ForwardAgent yes 00:04:43.141 ForwardX11 yes 00:04:43.141 00:04:43.153 [Pipeline] withEnv 00:04:43.155 [Pipeline] { 00:04:43.170 [Pipeline] sh 00:04:43.471 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:43.471 source /etc/os-release 00:04:43.471 [[ -e /image.version ]] && img=$(< /image.version) 00:04:43.471 # Minimal, systemd-like check. 
00:04:43.471 if [[ -e /.dockerenv ]]; then 00:04:43.471 # Clear garbage from the node's name: 00:04:43.471 # agt-er_autotest_547-896 -> autotest_547-896 00:04:43.471 # $HOSTNAME is the actual container id 00:04:43.471 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:43.471 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:43.471 # We can assume this is a mount from a host where container is running, 00:04:43.471 # so fetch its hostname to easily identify the target swarm worker. 00:04:43.471 container="$(< /etc/hostname) ($agent)" 00:04:43.471 else 00:04:43.471 # Fallback 00:04:43.471 container=$agent 00:04:43.471 fi 00:04:43.471 fi 00:04:43.471 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:43.471 00:04:43.483 [Pipeline] } 00:04:43.501 [Pipeline] // withEnv 00:04:43.509 [Pipeline] setCustomBuildProperty 00:04:43.524 [Pipeline] stage 00:04:43.527 [Pipeline] { (Tests) 00:04:43.546 [Pipeline] sh 00:04:43.832 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:44.138 [Pipeline] sh 00:04:44.413 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:44.686 [Pipeline] timeout 00:04:44.686 Timeout set to expire in 1 hr 30 min 00:04:44.688 [Pipeline] { 00:04:44.702 [Pipeline] sh 00:04:44.980 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:45.548 HEAD is now at 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:04:45.558 [Pipeline] sh 00:04:45.841 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:46.110 [Pipeline] sh 00:04:46.386 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:46.660 [Pipeline] sh 00:04:46.944 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:04:47.204 ++ readlink -f spdk_repo 00:04:47.204 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:47.204 + [[ -n /home/vagrant/spdk_repo ]] 00:04:47.204 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:47.204 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:47.204 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:47.204 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:04:47.204 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:47.204 + [[ raid-vg-autotest == pkgdep-* ]] 00:04:47.204 + cd /home/vagrant/spdk_repo 00:04:47.204 + source /etc/os-release 00:04:47.204 ++ NAME='Fedora Linux' 00:04:47.204 ++ VERSION='39 (Cloud Edition)' 00:04:47.204 ++ ID=fedora 00:04:47.204 ++ VERSION_ID=39 00:04:47.204 ++ VERSION_CODENAME= 00:04:47.204 ++ PLATFORM_ID=platform:f39 00:04:47.204 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:47.204 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:47.204 ++ LOGO=fedora-logo-icon 00:04:47.204 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:47.204 ++ HOME_URL=https://fedoraproject.org/ 00:04:47.204 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:47.204 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:47.204 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:47.204 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:47.204 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:47.204 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:47.204 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:47.204 ++ SUPPORT_END=2024-11-12 00:04:47.204 ++ VARIANT='Cloud Edition' 00:04:47.204 ++ VARIANT_ID=cloud 00:04:47.204 + uname -a 00:04:47.204 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:47.204 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:47.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.770 Hugepages 00:04:47.770 
node hugesize free / total 00:04:47.770 node0 1048576kB 0 / 0 00:04:47.770 node0 2048kB 0 / 0 00:04:47.770 00:04:47.770 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.770 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:47.770 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:47.770 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:47.770 + rm -f /tmp/spdk-ld-path 00:04:47.770 + source autorun-spdk.conf 00:04:47.770 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:47.770 ++ SPDK_RUN_ASAN=1 00:04:47.770 ++ SPDK_RUN_UBSAN=1 00:04:47.770 ++ SPDK_TEST_RAID=1 00:04:47.770 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:47.770 ++ RUN_NIGHTLY=0 00:04:47.770 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:47.770 + [[ -n '' ]] 00:04:47.770 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:47.770 + for M in /var/spdk/build-*-manifest.txt 00:04:47.770 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:47.770 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:47.770 + for M in /var/spdk/build-*-manifest.txt 00:04:47.770 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:47.770 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:47.770 + for M in /var/spdk/build-*-manifest.txt 00:04:47.770 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:47.770 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:47.770 ++ uname 00:04:47.770 + [[ Linux == \L\i\n\u\x ]] 00:04:47.770 + sudo dmesg -T 00:04:47.770 + sudo dmesg --clear 00:04:47.770 + dmesg_pid=5267 00:04:47.770 + sudo dmesg -Tw 00:04:47.770 + [[ Fedora Linux == FreeBSD ]] 00:04:47.770 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:47.770 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:47.770 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:47.770 + [[ -x /usr/src/fio-static/fio ]] 00:04:47.770 + export FIO_BIN=/usr/src/fio-static/fio 
00:04:47.770 + FIO_BIN=/usr/src/fio-static/fio 00:04:47.770 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:47.770 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:47.770 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:47.770 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:47.770 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:47.770 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:47.770 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:47.770 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:47.770 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:48.029 14:04:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:48.029 14:04:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:48.029 14:04:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:48.029 14:04:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:04:48.029 14:04:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:04:48.029 14:04:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:04:48.029 14:04:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:48.029 14:04:18 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:04:48.029 14:04:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:48.029 14:04:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:48.029 14:04:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:48.029 14:04:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.029 14:04:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:48.029 14:04:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:48.029 14:04:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.029 
14:04:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.029 14:04:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.029 14:04:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.029 14:04:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.029 14:04:18 -- paths/export.sh@5 -- $ export PATH 00:04:48.029 14:04:18 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.029 14:04:18 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:48.029 14:04:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:48.029 14:04:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732716258.XXXXXX 00:04:48.029 14:04:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732716258.bXQKeH 00:04:48.029 14:04:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:48.029 14:04:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:48.029 14:04:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:48.029 14:04:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:48.029 14:04:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:48.029 14:04:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:48.029 14:04:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:48.029 14:04:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:48.029 14:04:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:04:48.029 14:04:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:48.029 14:04:18 -- pm/common@17 -- $ local monitor 00:04:48.029 14:04:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.029 14:04:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.029 14:04:18 -- pm/common@25 -- $ sleep 1 00:04:48.029 14:04:18 -- pm/common@21 -- $ date +%s 00:04:48.029 14:04:18 -- pm/common@21 -- $ date +%s 00:04:48.029 14:04:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732716258 00:04:48.029 14:04:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732716258 00:04:48.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732716258_collect-cpu-load.pm.log 00:04:48.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732716258_collect-vmstat.pm.log 00:04:48.964 14:04:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:48.964 14:04:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:48.964 14:04:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:48.964 14:04:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:48.964 14:04:19 -- spdk/autobuild.sh@16 -- $ date -u 00:04:48.964 Wed Nov 27 02:04:19 PM UTC 2024 00:04:48.964 14:04:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:48.964 v25.01-pre-274-g9094b9600 00:04:48.964 14:04:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:48.964 14:04:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:48.964 14:04:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:48.964 14:04:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:48.964 14:04:19 -- common/autotest_common.sh@10 -- $ set +x 
00:04:48.964 ************************************ 00:04:48.964 START TEST asan 00:04:48.964 ************************************ 00:04:48.964 using asan 00:04:48.964 14:04:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:48.964 00:04:48.964 real 0m0.000s 00:04:48.964 user 0m0.000s 00:04:48.964 sys 0m0.000s 00:04:48.965 14:04:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:48.965 14:04:19 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:48.965 ************************************ 00:04:48.965 END TEST asan 00:04:48.965 ************************************ 00:04:48.965 14:04:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:48.965 14:04:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:48.965 14:04:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:48.965 14:04:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:48.965 14:04:19 -- common/autotest_common.sh@10 -- $ set +x 00:04:48.965 ************************************ 00:04:48.965 START TEST ubsan 00:04:48.965 ************************************ 00:04:48.965 using ubsan 00:04:48.965 14:04:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:48.965 00:04:48.965 real 0m0.000s 00:04:48.965 user 0m0.000s 00:04:48.965 sys 0m0.000s 00:04:48.965 14:04:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:48.965 14:04:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:48.965 ************************************ 00:04:48.965 END TEST ubsan 00:04:48.965 ************************************ 00:04:49.223 14:04:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:49.223 14:04:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:49.223 14:04:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:49.223 14:04:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:49.223 14:04:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:49.223 14:04:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:04:49.223 14:04:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:49.223 14:04:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:49.223 14:04:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:04:49.223 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:49.223 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:49.789 Using 'verbs' RDMA provider 00:05:02.952 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:17.834 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:17.834 Creating mk/config.mk...done. 00:05:17.834 Creating mk/cc.flags.mk...done. 00:05:17.834 Type 'make' to build. 00:05:17.834 14:04:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:05:17.834 14:04:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:17.834 14:04:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:17.834 14:04:46 -- common/autotest_common.sh@10 -- $ set +x 00:05:17.834 ************************************ 00:05:17.834 START TEST make 00:05:17.834 ************************************ 00:05:17.834 14:04:47 make -- common/autotest_common.sh@1129 -- $ make -j10 00:05:17.834 make[1]: Nothing to be done for 'all'. 
00:05:32.774 The Meson build system 00:05:32.774 Version: 1.5.0 00:05:32.774 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:32.774 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:32.774 Build type: native build 00:05:32.774 Program cat found: YES (/usr/bin/cat) 00:05:32.774 Project name: DPDK 00:05:32.774 Project version: 24.03.0 00:05:32.774 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:32.774 C linker for the host machine: cc ld.bfd 2.40-14 00:05:32.774 Host machine cpu family: x86_64 00:05:32.774 Host machine cpu: x86_64 00:05:32.774 Message: ## Building in Developer Mode ## 00:05:32.774 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:32.774 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:32.774 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:32.774 Program python3 found: YES (/usr/bin/python3) 00:05:32.774 Program cat found: YES (/usr/bin/cat) 00:05:32.774 Compiler for C supports arguments -march=native: YES 00:05:32.774 Checking for size of "void *" : 8 00:05:32.774 Checking for size of "void *" : 8 (cached) 00:05:32.774 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:32.774 Library m found: YES 00:05:32.774 Library numa found: YES 00:05:32.774 Has header "numaif.h" : YES 00:05:32.774 Library fdt found: NO 00:05:32.774 Library execinfo found: NO 00:05:32.774 Has header "execinfo.h" : YES 00:05:32.774 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:32.774 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:32.774 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:32.774 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:32.774 Run-time dependency openssl found: YES 3.1.1 00:05:32.774 Run-time dependency libpcap found: YES 1.10.4 00:05:32.774 Has header "pcap.h" with dependency 
libpcap: YES 00:05:32.774 Compiler for C supports arguments -Wcast-qual: YES 00:05:32.774 Compiler for C supports arguments -Wdeprecated: YES 00:05:32.774 Compiler for C supports arguments -Wformat: YES 00:05:32.774 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:32.774 Compiler for C supports arguments -Wformat-security: NO 00:05:32.774 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:32.774 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:32.774 Compiler for C supports arguments -Wnested-externs: YES 00:05:32.774 Compiler for C supports arguments -Wold-style-definition: YES 00:05:32.774 Compiler for C supports arguments -Wpointer-arith: YES 00:05:32.774 Compiler for C supports arguments -Wsign-compare: YES 00:05:32.774 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:32.774 Compiler for C supports arguments -Wundef: YES 00:05:32.774 Compiler for C supports arguments -Wwrite-strings: YES 00:05:32.774 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:32.774 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:32.774 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:32.774 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:32.774 Program objdump found: YES (/usr/bin/objdump) 00:05:32.774 Compiler for C supports arguments -mavx512f: YES 00:05:32.774 Checking if "AVX512 checking" compiles: YES 00:05:32.774 Fetching value of define "__SSE4_2__" : 1 00:05:32.774 Fetching value of define "__AES__" : 1 00:05:32.774 Fetching value of define "__AVX__" : 1 00:05:32.774 Fetching value of define "__AVX2__" : 1 00:05:32.774 Fetching value of define "__AVX512BW__" : (undefined) 00:05:32.774 Fetching value of define "__AVX512CD__" : (undefined) 00:05:32.774 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:32.774 Fetching value of define "__AVX512F__" : (undefined) 00:05:32.774 Fetching value of define "__AVX512VL__" : 
(undefined) 00:05:32.774 Fetching value of define "__PCLMUL__" : 1 00:05:32.774 Fetching value of define "__RDRND__" : 1 00:05:32.774 Fetching value of define "__RDSEED__" : 1 00:05:32.774 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:32.774 Fetching value of define "__znver1__" : (undefined) 00:05:32.774 Fetching value of define "__znver2__" : (undefined) 00:05:32.774 Fetching value of define "__znver3__" : (undefined) 00:05:32.774 Fetching value of define "__znver4__" : (undefined) 00:05:32.774 Library asan found: YES 00:05:32.774 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:32.774 Message: lib/log: Defining dependency "log" 00:05:32.774 Message: lib/kvargs: Defining dependency "kvargs" 00:05:32.774 Message: lib/telemetry: Defining dependency "telemetry" 00:05:32.774 Library rt found: YES 00:05:32.774 Checking for function "getentropy" : NO 00:05:32.774 Message: lib/eal: Defining dependency "eal" 00:05:32.774 Message: lib/ring: Defining dependency "ring" 00:05:32.774 Message: lib/rcu: Defining dependency "rcu" 00:05:32.774 Message: lib/mempool: Defining dependency "mempool" 00:05:32.774 Message: lib/mbuf: Defining dependency "mbuf" 00:05:32.774 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:32.774 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:32.775 Compiler for C supports arguments -mpclmul: YES 00:05:32.775 Compiler for C supports arguments -maes: YES 00:05:32.775 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:32.775 Compiler for C supports arguments -mavx512bw: YES 00:05:32.775 Compiler for C supports arguments -mavx512dq: YES 00:05:32.775 Compiler for C supports arguments -mavx512vl: YES 00:05:32.775 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:32.775 Compiler for C supports arguments -mavx2: YES 00:05:32.775 Compiler for C supports arguments -mavx: YES 00:05:32.775 Message: lib/net: Defining dependency "net" 00:05:32.775 Message: lib/meter: Defining 
dependency "meter" 00:05:32.775 Message: lib/ethdev: Defining dependency "ethdev" 00:05:32.775 Message: lib/pci: Defining dependency "pci" 00:05:32.775 Message: lib/cmdline: Defining dependency "cmdline" 00:05:32.775 Message: lib/hash: Defining dependency "hash" 00:05:32.775 Message: lib/timer: Defining dependency "timer" 00:05:32.775 Message: lib/compressdev: Defining dependency "compressdev" 00:05:32.775 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:32.775 Message: lib/dmadev: Defining dependency "dmadev" 00:05:32.775 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:32.775 Message: lib/power: Defining dependency "power" 00:05:32.775 Message: lib/reorder: Defining dependency "reorder" 00:05:32.775 Message: lib/security: Defining dependency "security" 00:05:32.775 Has header "linux/userfaultfd.h" : YES 00:05:32.775 Has header "linux/vduse.h" : YES 00:05:32.775 Message: lib/vhost: Defining dependency "vhost" 00:05:32.775 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:32.775 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:32.775 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:32.775 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:32.775 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:32.775 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:32.775 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:32.775 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:32.775 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:32.775 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:32.775 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:32.775 Configuring doxy-api-html.conf using configuration 00:05:32.775 Configuring doxy-api-man.conf using configuration 00:05:32.775 Program mandb found: YES 
(/usr/bin/mandb) 00:05:32.775 Program sphinx-build found: NO 00:05:32.775 Configuring rte_build_config.h using configuration 00:05:32.775 Message: 00:05:32.775 ================= 00:05:32.775 Applications Enabled 00:05:32.775 ================= 00:05:32.775 00:05:32.775 apps: 00:05:32.775 00:05:32.775 00:05:32.775 Message: 00:05:32.775 ================= 00:05:32.775 Libraries Enabled 00:05:32.775 ================= 00:05:32.775 00:05:32.775 libs: 00:05:32.775 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:32.775 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:32.775 cryptodev, dmadev, power, reorder, security, vhost, 00:05:32.775 00:05:32.775 Message: 00:05:32.775 =============== 00:05:32.775 Drivers Enabled 00:05:32.775 =============== 00:05:32.775 00:05:32.775 common: 00:05:32.775 00:05:32.775 bus: 00:05:32.775 pci, vdev, 00:05:32.775 mempool: 00:05:32.775 ring, 00:05:32.775 dma: 00:05:32.775 00:05:32.775 net: 00:05:32.775 00:05:32.775 crypto: 00:05:32.775 00:05:32.775 compress: 00:05:32.775 00:05:32.775 vdpa: 00:05:32.775 00:05:32.775 00:05:32.775 Message: 00:05:32.775 ================= 00:05:32.775 Content Skipped 00:05:32.775 ================= 00:05:32.775 00:05:32.775 apps: 00:05:32.775 dumpcap: explicitly disabled via build config 00:05:32.775 graph: explicitly disabled via build config 00:05:32.775 pdump: explicitly disabled via build config 00:05:32.775 proc-info: explicitly disabled via build config 00:05:32.775 test-acl: explicitly disabled via build config 00:05:32.775 test-bbdev: explicitly disabled via build config 00:05:32.775 test-cmdline: explicitly disabled via build config 00:05:32.775 test-compress-perf: explicitly disabled via build config 00:05:32.775 test-crypto-perf: explicitly disabled via build config 00:05:32.775 test-dma-perf: explicitly disabled via build config 00:05:32.775 test-eventdev: explicitly disabled via build config 00:05:32.775 test-fib: explicitly disabled via build config 00:05:32.775 
test-flow-perf: explicitly disabled via build config 00:05:32.775 test-gpudev: explicitly disabled via build config 00:05:32.775 test-mldev: explicitly disabled via build config 00:05:32.775 test-pipeline: explicitly disabled via build config 00:05:32.775 test-pmd: explicitly disabled via build config 00:05:32.775 test-regex: explicitly disabled via build config 00:05:32.775 test-sad: explicitly disabled via build config 00:05:32.775 test-security-perf: explicitly disabled via build config 00:05:32.775 00:05:32.775 libs: 00:05:32.775 argparse: explicitly disabled via build config 00:05:32.775 metrics: explicitly disabled via build config 00:05:32.775 acl: explicitly disabled via build config 00:05:32.775 bbdev: explicitly disabled via build config 00:05:32.775 bitratestats: explicitly disabled via build config 00:05:32.775 bpf: explicitly disabled via build config 00:05:32.775 cfgfile: explicitly disabled via build config 00:05:32.775 distributor: explicitly disabled via build config 00:05:32.775 efd: explicitly disabled via build config 00:05:32.775 eventdev: explicitly disabled via build config 00:05:32.775 dispatcher: explicitly disabled via build config 00:05:32.775 gpudev: explicitly disabled via build config 00:05:32.775 gro: explicitly disabled via build config 00:05:32.775 gso: explicitly disabled via build config 00:05:32.775 ip_frag: explicitly disabled via build config 00:05:32.775 jobstats: explicitly disabled via build config 00:05:32.775 latencystats: explicitly disabled via build config 00:05:32.775 lpm: explicitly disabled via build config 00:05:32.775 member: explicitly disabled via build config 00:05:32.775 pcapng: explicitly disabled via build config 00:05:32.775 rawdev: explicitly disabled via build config 00:05:32.775 regexdev: explicitly disabled via build config 00:05:32.775 mldev: explicitly disabled via build config 00:05:32.775 rib: explicitly disabled via build config 00:05:32.775 sched: explicitly disabled via build config 00:05:32.775 
stack: explicitly disabled via build config 00:05:32.775 ipsec: explicitly disabled via build config 00:05:32.775 pdcp: explicitly disabled via build config 00:05:32.775 fib: explicitly disabled via build config 00:05:32.775 port: explicitly disabled via build config 00:05:32.775 pdump: explicitly disabled via build config 00:05:32.775 table: explicitly disabled via build config 00:05:32.775 pipeline: explicitly disabled via build config 00:05:32.775 graph: explicitly disabled via build config 00:05:32.775 node: explicitly disabled via build config 00:05:32.775 00:05:32.775 drivers: 00:05:32.775 common/cpt: not in enabled drivers build config 00:05:32.775 common/dpaax: not in enabled drivers build config 00:05:32.775 common/iavf: not in enabled drivers build config 00:05:32.775 common/idpf: not in enabled drivers build config 00:05:32.775 common/ionic: not in enabled drivers build config 00:05:32.775 common/mvep: not in enabled drivers build config 00:05:32.775 common/octeontx: not in enabled drivers build config 00:05:32.775 bus/auxiliary: not in enabled drivers build config 00:05:32.775 bus/cdx: not in enabled drivers build config 00:05:32.775 bus/dpaa: not in enabled drivers build config 00:05:32.775 bus/fslmc: not in enabled drivers build config 00:05:32.775 bus/ifpga: not in enabled drivers build config 00:05:32.775 bus/platform: not in enabled drivers build config 00:05:32.775 bus/uacce: not in enabled drivers build config 00:05:32.775 bus/vmbus: not in enabled drivers build config 00:05:32.775 common/cnxk: not in enabled drivers build config 00:05:32.775 common/mlx5: not in enabled drivers build config 00:05:32.775 common/nfp: not in enabled drivers build config 00:05:32.775 common/nitrox: not in enabled drivers build config 00:05:32.775 common/qat: not in enabled drivers build config 00:05:32.775 common/sfc_efx: not in enabled drivers build config 00:05:32.775 mempool/bucket: not in enabled drivers build config 00:05:32.775 mempool/cnxk: not in enabled 
drivers build config 00:05:32.775 mempool/dpaa: not in enabled drivers build config 00:05:32.775 mempool/dpaa2: not in enabled drivers build config 00:05:32.775 mempool/octeontx: not in enabled drivers build config 00:05:32.775 mempool/stack: not in enabled drivers build config 00:05:32.775 dma/cnxk: not in enabled drivers build config 00:05:32.775 dma/dpaa: not in enabled drivers build config 00:05:32.775 dma/dpaa2: not in enabled drivers build config 00:05:32.775 dma/hisilicon: not in enabled drivers build config 00:05:32.775 dma/idxd: not in enabled drivers build config 00:05:32.775 dma/ioat: not in enabled drivers build config 00:05:32.775 dma/skeleton: not in enabled drivers build config 00:05:32.775 net/af_packet: not in enabled drivers build config 00:05:32.775 net/af_xdp: not in enabled drivers build config 00:05:32.775 net/ark: not in enabled drivers build config 00:05:32.775 net/atlantic: not in enabled drivers build config 00:05:32.775 net/avp: not in enabled drivers build config 00:05:32.775 net/axgbe: not in enabled drivers build config 00:05:32.775 net/bnx2x: not in enabled drivers build config 00:05:32.775 net/bnxt: not in enabled drivers build config 00:05:32.775 net/bonding: not in enabled drivers build config 00:05:32.775 net/cnxk: not in enabled drivers build config 00:05:32.775 net/cpfl: not in enabled drivers build config 00:05:32.775 net/cxgbe: not in enabled drivers build config 00:05:32.775 net/dpaa: not in enabled drivers build config 00:05:32.775 net/dpaa2: not in enabled drivers build config 00:05:32.775 net/e1000: not in enabled drivers build config 00:05:32.775 net/ena: not in enabled drivers build config 00:05:32.775 net/enetc: not in enabled drivers build config 00:05:32.775 net/enetfec: not in enabled drivers build config 00:05:32.776 net/enic: not in enabled drivers build config 00:05:32.776 net/failsafe: not in enabled drivers build config 00:05:32.776 net/fm10k: not in enabled drivers build config 00:05:32.776 net/gve: not in 
enabled drivers build config 00:05:32.776 net/hinic: not in enabled drivers build config 00:05:32.776 net/hns3: not in enabled drivers build config 00:05:32.776 net/i40e: not in enabled drivers build config 00:05:32.776 net/iavf: not in enabled drivers build config 00:05:32.776 net/ice: not in enabled drivers build config 00:05:32.776 net/idpf: not in enabled drivers build config 00:05:32.776 net/igc: not in enabled drivers build config 00:05:32.776 net/ionic: not in enabled drivers build config 00:05:32.776 net/ipn3ke: not in enabled drivers build config 00:05:32.776 net/ixgbe: not in enabled drivers build config 00:05:32.776 net/mana: not in enabled drivers build config 00:05:32.776 net/memif: not in enabled drivers build config 00:05:32.776 net/mlx4: not in enabled drivers build config 00:05:32.776 net/mlx5: not in enabled drivers build config 00:05:32.776 net/mvneta: not in enabled drivers build config 00:05:32.776 net/mvpp2: not in enabled drivers build config 00:05:32.776 net/netvsc: not in enabled drivers build config 00:05:32.776 net/nfb: not in enabled drivers build config 00:05:32.776 net/nfp: not in enabled drivers build config 00:05:32.776 net/ngbe: not in enabled drivers build config 00:05:32.776 net/null: not in enabled drivers build config 00:05:32.776 net/octeontx: not in enabled drivers build config 00:05:32.776 net/octeon_ep: not in enabled drivers build config 00:05:32.776 net/pcap: not in enabled drivers build config 00:05:32.776 net/pfe: not in enabled drivers build config 00:05:32.776 net/qede: not in enabled drivers build config 00:05:32.776 net/ring: not in enabled drivers build config 00:05:32.776 net/sfc: not in enabled drivers build config 00:05:32.776 net/softnic: not in enabled drivers build config 00:05:32.776 net/tap: not in enabled drivers build config 00:05:32.776 net/thunderx: not in enabled drivers build config 00:05:32.776 net/txgbe: not in enabled drivers build config 00:05:32.776 net/vdev_netvsc: not in enabled drivers build 
config 00:05:32.776 net/vhost: not in enabled drivers build config 00:05:32.776 net/virtio: not in enabled drivers build config 00:05:32.776 net/vmxnet3: not in enabled drivers build config 00:05:32.776 raw/*: missing internal dependency, "rawdev" 00:05:32.776 crypto/armv8: not in enabled drivers build config 00:05:32.776 crypto/bcmfs: not in enabled drivers build config 00:05:32.776 crypto/caam_jr: not in enabled drivers build config 00:05:32.776 crypto/ccp: not in enabled drivers build config 00:05:32.776 crypto/cnxk: not in enabled drivers build config 00:05:32.776 crypto/dpaa_sec: not in enabled drivers build config 00:05:32.776 crypto/dpaa2_sec: not in enabled drivers build config 00:05:32.776 crypto/ipsec_mb: not in enabled drivers build config 00:05:32.776 crypto/mlx5: not in enabled drivers build config 00:05:32.776 crypto/mvsam: not in enabled drivers build config 00:05:32.776 crypto/nitrox: not in enabled drivers build config 00:05:32.776 crypto/null: not in enabled drivers build config 00:05:32.776 crypto/octeontx: not in enabled drivers build config 00:05:32.776 crypto/openssl: not in enabled drivers build config 00:05:32.776 crypto/scheduler: not in enabled drivers build config 00:05:32.776 crypto/uadk: not in enabled drivers build config 00:05:32.776 crypto/virtio: not in enabled drivers build config 00:05:32.776 compress/isal: not in enabled drivers build config 00:05:32.776 compress/mlx5: not in enabled drivers build config 00:05:32.776 compress/nitrox: not in enabled drivers build config 00:05:32.776 compress/octeontx: not in enabled drivers build config 00:05:32.776 compress/zlib: not in enabled drivers build config 00:05:32.776 regex/*: missing internal dependency, "regexdev" 00:05:32.776 ml/*: missing internal dependency, "mldev" 00:05:32.776 vdpa/ifc: not in enabled drivers build config 00:05:32.776 vdpa/mlx5: not in enabled drivers build config 00:05:32.776 vdpa/nfp: not in enabled drivers build config 00:05:32.776 vdpa/sfc: not in enabled 
drivers build config 00:05:32.776 event/*: missing internal dependency, "eventdev" 00:05:32.776 baseband/*: missing internal dependency, "bbdev" 00:05:32.776 gpu/*: missing internal dependency, "gpudev" 00:05:32.776 00:05:32.776 00:05:32.776 Build targets in project: 85 00:05:32.776 00:05:32.776 DPDK 24.03.0 00:05:32.776 00:05:32.776 User defined options 00:05:32.776 buildtype : debug 00:05:32.776 default_library : shared 00:05:32.776 libdir : lib 00:05:32.776 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:32.776 b_sanitize : address 00:05:32.776 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:32.776 c_link_args : 00:05:32.776 cpu_instruction_set: native 00:05:32.776 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:32.776 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:32.776 enable_docs : false 00:05:32.776 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:32.776 enable_kmods : false 00:05:32.776 max_lcores : 128 00:05:32.776 tests : false 00:05:32.776 00:05:32.776 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:32.776 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:32.776 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:32.776 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:32.776 [3/268] Linking static target lib/librte_kvargs.a 00:05:32.776 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:05:32.776 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:32.776 [6/268] Linking static target lib/librte_log.a 00:05:32.776 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:32.776 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.776 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:32.776 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:32.776 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:32.776 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:32.776 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:32.776 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:32.776 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:32.776 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:32.776 [17/268] Linking static target lib/librte_telemetry.a 00:05:32.776 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.776 [19/268] Linking target lib/librte_log.so.24.1 00:05:32.776 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:32.776 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:33.035 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:33.035 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:33.035 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:33.035 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:33.293 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 
00:05:33.293 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:33.293 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:33.293 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:33.293 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.293 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:33.293 [32/268] Linking target lib/librte_telemetry.so.24.1 00:05:33.293 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:33.551 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:33.551 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:33.551 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:33.809 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:34.068 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:34.068 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:34.068 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:34.068 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:34.068 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:34.068 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:34.326 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:34.327 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:34.584 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:34.584 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:34.584 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:34.843 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:34.843 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:35.101 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:35.101 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:35.101 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:35.101 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:35.101 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:35.360 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:35.360 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:35.618 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:35.618 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:35.876 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:35.876 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:35.877 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:35.877 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:35.877 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:35.877 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:35.877 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:36.137 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:36.137 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:36.396 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:36.396 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:36.655 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:36.655 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:36.655 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:36.655 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:36.655 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:36.655 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:36.915 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:36.915 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:36.915 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:36.915 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:37.173 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:37.173 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:37.173 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:37.173 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:37.173 [85/268] Linking static target lib/librte_ring.a 00:05:37.432 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:37.432 [87/268] Linking static target lib/librte_eal.a 00:05:37.690 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:37.691 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:37.691 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:37.691 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.950 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:38.208 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:38.208 [94/268] 
Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:38.208 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:38.208 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:38.467 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:38.467 [98/268] Linking static target lib/librte_rcu.a 00:05:38.467 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:38.467 [100/268] Linking static target lib/librte_mempool.a 00:05:38.467 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:38.467 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:38.467 [103/268] Linking static target lib/librte_mbuf.a 00:05:38.725 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:38.725 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:38.725 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.982 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:38.982 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:38.982 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:38.982 [110/268] Linking static target lib/librte_meter.a 00:05:38.982 [111/268] Linking static target lib/librte_net.a 00:05:39.238 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:39.495 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:39.495 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.495 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:39.495 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.495 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.495 [118/268] 
Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.753 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:40.319 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:40.577 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:40.577 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:40.577 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:40.577 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:40.876 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:40.876 [126/268] Linking static target lib/librte_pci.a 00:05:40.876 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:40.876 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:40.876 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:41.134 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:41.134 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:41.134 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:41.134 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.134 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:41.392 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:41.392 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:41.392 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:41.392 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:41.392 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:41.392 [140/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:41.651 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:41.651 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:41.651 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:41.651 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:41.651 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:41.651 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:42.218 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:42.218 [148/268] Linking static target lib/librte_cmdline.a 00:05:42.218 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:42.218 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:42.218 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:42.218 [152/268] Linking static target lib/librte_ethdev.a 00:05:42.218 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:42.477 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:42.477 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:42.477 [156/268] Linking static target lib/librte_timer.a 00:05:42.477 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:43.043 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:43.043 [159/268] Linking static target lib/librte_hash.a 00:05:43.043 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:43.043 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.302 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:43.302 [163/268] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:43.302 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:43.302 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:43.302 [166/268] Linking static target lib/librte_compressdev.a 00:05:43.302 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:43.560 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:43.560 [169/268] Linking static target lib/librte_dmadev.a 00:05:43.819 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.820 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:43.820 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:43.820 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:44.079 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.079 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:44.338 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.338 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:44.338 [178/268] Linking static target lib/librte_cryptodev.a 00:05:44.597 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.597 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:44.597 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:44.597 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:44.597 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:44.855 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 
00:05:45.114 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:45.114 [186/268] Linking static target lib/librte_power.a 00:05:45.373 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:45.373 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:45.373 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:45.373 [190/268] Linking static target lib/librte_reorder.a 00:05:45.631 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:45.631 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:45.889 [193/268] Linking static target lib/librte_security.a 00:05:45.889 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.147 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:46.405 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.663 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.663 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:46.663 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:46.663 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:46.921 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.210 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:47.210 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:47.210 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:47.210 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:47.468 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:47.726 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:47.726 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:47.726 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:47.984 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:47.984 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:47.984 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:47.984 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:47.984 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:47.984 [215/268] Linking static target drivers/librte_bus_pci.a 00:05:48.242 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:48.242 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:48.242 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:48.242 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:48.242 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:48.242 [221/268] Linking static target drivers/librte_bus_vdev.a 00:05:48.242 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:48.499 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:48.499 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:48.499 [225/268] Linking static target drivers/librte_mempool_ring.a 00:05:48.499 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.757 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:05:49.331 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:49.331 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.589 [230/268] Linking target lib/librte_eal.so.24.1 00:05:49.589 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:49.847 [232/268] Linking target lib/librte_timer.so.24.1 00:05:49.847 [233/268] Linking target lib/librte_pci.so.24.1 00:05:49.847 [234/268] Linking target lib/librte_dmadev.so.24.1 00:05:49.847 [235/268] Linking target lib/librte_ring.so.24.1 00:05:49.847 [236/268] Linking target lib/librte_meter.so.24.1 00:05:49.847 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:49.847 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:49.847 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:49.847 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:49.847 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:49.847 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:49.847 [243/268] Linking target lib/librte_rcu.so.24.1 00:05:49.847 [244/268] Linking target lib/librte_mempool.so.24.1 00:05:50.104 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:50.104 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:50.104 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:50.104 [248/268] Linking target lib/librte_mbuf.so.24.1 00:05:50.104 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:50.362 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:50.362 [251/268] Linking target lib/librte_net.so.24.1 00:05:50.362 [252/268] Linking target 
lib/librte_compressdev.so.24.1 00:05:50.362 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:05:50.362 [254/268] Linking target lib/librte_reorder.so.24.1 00:05:50.619 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:50.619 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:50.619 [257/268] Linking target lib/librte_security.so.24.1 00:05:50.619 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:50.619 [259/268] Linking target lib/librte_hash.so.24.1 00:05:50.619 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.619 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:50.619 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:50.877 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:50.877 [264/268] Linking target lib/librte_power.so.24.1 00:05:54.161 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:54.161 [266/268] Linking static target lib/librte_vhost.a 00:05:55.601 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.601 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:55.601 INFO: autodetecting backend as ninja 00:05:55.601 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:17.522 CC lib/ut_mock/mock.o 00:06:17.522 CC lib/ut/ut.o 00:06:17.522 CC lib/log/log.o 00:06:17.522 CC lib/log/log_flags.o 00:06:17.522 CC lib/log/log_deprecated.o 00:06:17.522 LIB libspdk_ut_mock.a 00:06:17.522 LIB libspdk_ut.a 00:06:17.522 LIB libspdk_log.a 00:06:17.522 SO libspdk_ut_mock.so.6.0 00:06:17.522 SO libspdk_ut.so.2.0 00:06:17.522 SO libspdk_log.so.7.1 00:06:17.522 SYMLINK libspdk_ut_mock.so 00:06:17.522 SYMLINK libspdk_ut.so 00:06:17.522 SYMLINK libspdk_log.so 
00:06:17.522 CC lib/dma/dma.o 00:06:17.522 CXX lib/trace_parser/trace.o 00:06:17.522 CC lib/util/base64.o 00:06:17.522 CC lib/util/bit_array.o 00:06:17.522 CC lib/util/cpuset.o 00:06:17.522 CC lib/ioat/ioat.o 00:06:17.522 CC lib/util/crc16.o 00:06:17.522 CC lib/util/crc32.o 00:06:17.522 CC lib/util/crc32c.o 00:06:17.832 CC lib/vfio_user/host/vfio_user_pci.o 00:06:17.832 CC lib/util/crc32_ieee.o 00:06:17.832 CC lib/util/crc64.o 00:06:17.832 CC lib/util/dif.o 00:06:17.832 CC lib/util/fd.o 00:06:17.832 CC lib/util/fd_group.o 00:06:17.832 CC lib/util/file.o 00:06:17.832 LIB libspdk_dma.a 00:06:17.832 CC lib/util/hexlify.o 00:06:18.090 SO libspdk_dma.so.5.0 00:06:18.090 CC lib/vfio_user/host/vfio_user.o 00:06:18.090 CC lib/util/iov.o 00:06:18.090 CC lib/util/math.o 00:06:18.090 SYMLINK libspdk_dma.so 00:06:18.090 LIB libspdk_ioat.a 00:06:18.090 CC lib/util/net.o 00:06:18.090 CC lib/util/pipe.o 00:06:18.090 SO libspdk_ioat.so.7.0 00:06:18.090 CC lib/util/strerror_tls.o 00:06:18.090 SYMLINK libspdk_ioat.so 00:06:18.090 CC lib/util/string.o 00:06:18.348 CC lib/util/uuid.o 00:06:18.348 CC lib/util/xor.o 00:06:18.348 LIB libspdk_vfio_user.a 00:06:18.348 SO libspdk_vfio_user.so.5.0 00:06:18.348 CC lib/util/zipf.o 00:06:18.348 CC lib/util/md5.o 00:06:18.348 SYMLINK libspdk_vfio_user.so 00:06:18.607 LIB libspdk_util.a 00:06:18.865 LIB libspdk_trace_parser.a 00:06:18.865 SO libspdk_util.so.10.1 00:06:18.865 SO libspdk_trace_parser.so.6.0 00:06:18.865 SYMLINK libspdk_util.so 00:06:19.123 SYMLINK libspdk_trace_parser.so 00:06:19.123 CC lib/vmd/vmd.o 00:06:19.123 CC lib/vmd/led.o 00:06:19.123 CC lib/rdma_utils/rdma_utils.o 00:06:19.123 CC lib/json/json_parse.o 00:06:19.123 CC lib/json/json_util.o 00:06:19.123 CC lib/json/json_write.o 00:06:19.123 CC lib/idxd/idxd.o 00:06:19.123 CC lib/idxd/idxd_user.o 00:06:19.123 CC lib/env_dpdk/env.o 00:06:19.123 CC lib/conf/conf.o 00:06:19.381 CC lib/env_dpdk/memory.o 00:06:19.381 LIB libspdk_conf.a 00:06:19.381 CC lib/env_dpdk/pci.o 
00:06:19.639 SO libspdk_conf.so.6.0 00:06:19.639 CC lib/env_dpdk/init.o 00:06:19.639 SYMLINK libspdk_conf.so 00:06:19.639 CC lib/env_dpdk/threads.o 00:06:19.639 CC lib/idxd/idxd_kernel.o 00:06:19.639 LIB libspdk_rdma_utils.a 00:06:19.896 CC lib/env_dpdk/pci_ioat.o 00:06:19.897 LIB libspdk_json.a 00:06:19.897 SO libspdk_rdma_utils.so.1.0 00:06:19.897 SO libspdk_json.so.6.0 00:06:19.897 SYMLINK libspdk_rdma_utils.so 00:06:19.897 CC lib/env_dpdk/pci_virtio.o 00:06:19.897 SYMLINK libspdk_json.so 00:06:19.897 CC lib/env_dpdk/pci_vmd.o 00:06:19.897 CC lib/env_dpdk/pci_idxd.o 00:06:19.897 CC lib/env_dpdk/pci_event.o 00:06:20.154 CC lib/env_dpdk/sigbus_handler.o 00:06:20.154 CC lib/rdma_provider/common.o 00:06:20.154 CC lib/jsonrpc/jsonrpc_server.o 00:06:20.154 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:20.154 LIB libspdk_vmd.a 00:06:20.154 CC lib/jsonrpc/jsonrpc_client.o 00:06:20.154 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:20.154 LIB libspdk_idxd.a 00:06:20.154 SO libspdk_vmd.so.6.0 00:06:20.154 SO libspdk_idxd.so.12.1 00:06:20.154 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:20.154 CC lib/env_dpdk/pci_dpdk.o 00:06:20.154 SYMLINK libspdk_vmd.so 00:06:20.154 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:20.413 SYMLINK libspdk_idxd.so 00:06:20.413 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:20.413 LIB libspdk_jsonrpc.a 00:06:20.413 SO libspdk_jsonrpc.so.6.0 00:06:20.672 SYMLINK libspdk_jsonrpc.so 00:06:20.672 LIB libspdk_rdma_provider.a 00:06:20.672 SO libspdk_rdma_provider.so.7.0 00:06:20.672 SYMLINK libspdk_rdma_provider.so 00:06:20.930 CC lib/rpc/rpc.o 00:06:21.187 LIB libspdk_rpc.a 00:06:21.187 SO libspdk_rpc.so.6.0 00:06:21.187 SYMLINK libspdk_rpc.so 00:06:21.445 LIB libspdk_env_dpdk.a 00:06:21.445 SO libspdk_env_dpdk.so.15.1 00:06:21.445 CC lib/notify/notify.o 00:06:21.445 CC lib/notify/notify_rpc.o 00:06:21.445 CC lib/trace/trace.o 00:06:21.445 CC lib/trace/trace_rpc.o 00:06:21.445 CC lib/trace/trace_flags.o 00:06:21.445 CC lib/keyring/keyring.o 00:06:21.445 CC 
lib/keyring/keyring_rpc.o 00:06:21.702 SYMLINK libspdk_env_dpdk.so 00:06:21.702 LIB libspdk_notify.a 00:06:21.702 SO libspdk_notify.so.6.0 00:06:21.702 LIB libspdk_keyring.a 00:06:21.702 SYMLINK libspdk_notify.so 00:06:21.702 SO libspdk_keyring.so.2.0 00:06:21.702 LIB libspdk_trace.a 00:06:21.960 SYMLINK libspdk_keyring.so 00:06:21.960 SO libspdk_trace.so.11.0 00:06:21.960 SYMLINK libspdk_trace.so 00:06:22.218 CC lib/sock/sock.o 00:06:22.218 CC lib/sock/sock_rpc.o 00:06:22.218 CC lib/thread/thread.o 00:06:22.218 CC lib/thread/iobuf.o 00:06:22.784 LIB libspdk_sock.a 00:06:23.067 SO libspdk_sock.so.10.0 00:06:23.067 SYMLINK libspdk_sock.so 00:06:23.326 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:23.326 CC lib/nvme/nvme_fabric.o 00:06:23.326 CC lib/nvme/nvme_ctrlr.o 00:06:23.326 CC lib/nvme/nvme_ns_cmd.o 00:06:23.326 CC lib/nvme/nvme_ns.o 00:06:23.326 CC lib/nvme/nvme_pcie_common.o 00:06:23.326 CC lib/nvme/nvme_pcie.o 00:06:23.326 CC lib/nvme/nvme_qpair.o 00:06:23.326 CC lib/nvme/nvme.o 00:06:24.260 CC lib/nvme/nvme_quirks.o 00:06:24.518 CC lib/nvme/nvme_transport.o 00:06:24.776 CC lib/nvme/nvme_discovery.o 00:06:24.776 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:24.776 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:24.776 CC lib/nvme/nvme_tcp.o 00:06:25.033 CC lib/nvme/nvme_opal.o 00:06:25.291 LIB libspdk_thread.a 00:06:25.291 CC lib/nvme/nvme_io_msg.o 00:06:25.291 SO libspdk_thread.so.11.0 00:06:25.548 SYMLINK libspdk_thread.so 00:06:25.548 CC lib/nvme/nvme_poll_group.o 00:06:25.804 CC lib/init/json_config.o 00:06:25.804 CC lib/accel/accel.o 00:06:25.804 CC lib/blob/blobstore.o 00:06:25.805 CC lib/blob/request.o 00:06:26.062 CC lib/blob/zeroes.o 00:06:26.062 CC lib/init/subsystem.o 00:06:26.319 CC lib/accel/accel_rpc.o 00:06:26.319 CC lib/init/subsystem_rpc.o 00:06:26.319 CC lib/init/rpc.o 00:06:26.576 CC lib/blob/blob_bs_dev.o 00:06:26.576 CC lib/fsdev/fsdev.o 00:06:26.576 CC lib/accel/accel_sw.o 00:06:26.576 CC lib/virtio/virtio.o 00:06:26.576 LIB libspdk_init.a 00:06:26.576 SO 
libspdk_init.so.6.0 00:06:26.833 CC lib/fsdev/fsdev_io.o 00:06:26.833 SYMLINK libspdk_init.so 00:06:26.833 CC lib/nvme/nvme_zns.o 00:06:27.090 CC lib/nvme/nvme_stubs.o 00:06:27.090 CC lib/virtio/virtio_vhost_user.o 00:06:27.348 CC lib/event/app.o 00:06:27.348 CC lib/fsdev/fsdev_rpc.o 00:06:27.606 LIB libspdk_accel.a 00:06:27.606 CC lib/nvme/nvme_auth.o 00:06:27.606 SO libspdk_accel.so.16.0 00:06:27.606 SYMLINK libspdk_accel.so 00:06:27.606 CC lib/nvme/nvme_cuse.o 00:06:27.606 CC lib/virtio/virtio_vfio_user.o 00:06:27.864 LIB libspdk_fsdev.a 00:06:27.864 SO libspdk_fsdev.so.2.0 00:06:27.864 SYMLINK libspdk_fsdev.so 00:06:27.864 CC lib/event/reactor.o 00:06:27.864 CC lib/virtio/virtio_pci.o 00:06:28.122 CC lib/nvme/nvme_rdma.o 00:06:28.122 CC lib/event/log_rpc.o 00:06:28.122 CC lib/event/app_rpc.o 00:06:28.380 CC lib/event/scheduler_static.o 00:06:28.380 CC lib/bdev/bdev.o 00:06:28.380 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:28.638 LIB libspdk_virtio.a 00:06:28.638 CC lib/bdev/bdev_rpc.o 00:06:28.638 SO libspdk_virtio.so.7.0 00:06:28.638 CC lib/bdev/bdev_zone.o 00:06:28.897 SYMLINK libspdk_virtio.so 00:06:28.897 CC lib/bdev/part.o 00:06:28.897 LIB libspdk_event.a 00:06:28.897 SO libspdk_event.so.14.0 00:06:28.897 CC lib/bdev/scsi_nvme.o 00:06:29.156 SYMLINK libspdk_event.so 00:06:30.091 LIB libspdk_fuse_dispatcher.a 00:06:30.091 SO libspdk_fuse_dispatcher.so.1.0 00:06:30.091 SYMLINK libspdk_fuse_dispatcher.so 00:06:30.657 LIB libspdk_nvme.a 00:06:30.916 SO libspdk_nvme.so.15.0 00:06:31.484 SYMLINK libspdk_nvme.so 00:06:31.743 LIB libspdk_blob.a 00:06:31.743 SO libspdk_blob.so.12.0 00:06:32.001 SYMLINK libspdk_blob.so 00:06:32.259 CC lib/lvol/lvol.o 00:06:32.259 CC lib/blobfs/blobfs.o 00:06:32.259 CC lib/blobfs/tree.o 00:06:32.827 LIB libspdk_bdev.a 00:06:32.827 SO libspdk_bdev.so.17.0 00:06:33.086 SYMLINK libspdk_bdev.so 00:06:33.343 CC lib/ublk/ublk.o 00:06:33.344 CC lib/ublk/ublk_rpc.o 00:06:33.344 CC lib/nbd/nbd.o 00:06:33.344 CC lib/scsi/dev.o 00:06:33.344 
CC lib/nbd/nbd_rpc.o 00:06:33.344 CC lib/scsi/lun.o 00:06:33.344 CC lib/ftl/ftl_core.o 00:06:33.344 CC lib/nvmf/ctrlr.o 00:06:33.601 LIB libspdk_blobfs.a 00:06:33.601 SO libspdk_blobfs.so.11.0 00:06:33.601 CC lib/nvmf/ctrlr_discovery.o 00:06:33.601 SYMLINK libspdk_blobfs.so 00:06:33.601 CC lib/nvmf/ctrlr_bdev.o 00:06:33.859 CC lib/scsi/port.o 00:06:33.859 CC lib/scsi/scsi.o 00:06:34.117 CC lib/nvmf/subsystem.o 00:06:34.117 LIB libspdk_lvol.a 00:06:34.117 CC lib/ftl/ftl_init.o 00:06:34.117 SO libspdk_lvol.so.11.0 00:06:34.117 CC lib/scsi/scsi_bdev.o 00:06:34.117 CC lib/ftl/ftl_layout.o 00:06:34.117 SYMLINK libspdk_lvol.so 00:06:34.375 CC lib/ftl/ftl_debug.o 00:06:34.375 LIB libspdk_ublk.a 00:06:34.375 LIB libspdk_nbd.a 00:06:34.375 SO libspdk_ublk.so.3.0 00:06:34.375 SO libspdk_nbd.so.7.0 00:06:34.375 SYMLINK libspdk_ublk.so 00:06:34.375 CC lib/ftl/ftl_io.o 00:06:34.375 CC lib/nvmf/nvmf.o 00:06:34.375 SYMLINK libspdk_nbd.so 00:06:34.375 CC lib/scsi/scsi_pr.o 00:06:34.633 CC lib/nvmf/nvmf_rpc.o 00:06:34.934 CC lib/nvmf/transport.o 00:06:34.934 CC lib/nvmf/tcp.o 00:06:34.934 CC lib/ftl/ftl_sb.o 00:06:34.934 CC lib/nvmf/stubs.o 00:06:35.206 CC lib/nvmf/mdns_server.o 00:06:35.206 CC lib/ftl/ftl_l2p.o 00:06:35.464 CC lib/scsi/scsi_rpc.o 00:06:35.722 CC lib/scsi/task.o 00:06:35.722 CC lib/nvmf/rdma.o 00:06:35.722 CC lib/ftl/ftl_l2p_flat.o 00:06:35.981 CC lib/nvmf/auth.o 00:06:35.981 LIB libspdk_scsi.a 00:06:36.239 CC lib/ftl/ftl_nv_cache.o 00:06:36.239 SO libspdk_scsi.so.9.0 00:06:36.239 CC lib/ftl/ftl_band.o 00:06:36.239 SYMLINK libspdk_scsi.so 00:06:36.239 CC lib/ftl/ftl_band_ops.o 00:06:36.239 CC lib/ftl/ftl_writer.o 00:06:36.806 CC lib/iscsi/conn.o 00:06:36.806 CC lib/iscsi/init_grp.o 00:06:36.806 CC lib/ftl/ftl_rq.o 00:06:36.806 CC lib/iscsi/iscsi.o 00:06:36.806 CC lib/vhost/vhost.o 00:06:37.064 CC lib/ftl/ftl_reloc.o 00:06:37.064 CC lib/iscsi/param.o 00:06:37.064 CC lib/vhost/vhost_rpc.o 00:06:37.323 CC lib/vhost/vhost_scsi.o 00:06:37.323 CC lib/ftl/ftl_l2p_cache.o 
00:06:37.890 CC lib/ftl/ftl_p2l.o 00:06:37.890 CC lib/vhost/vhost_blk.o 00:06:38.148 CC lib/vhost/rte_vhost_user.o 00:06:38.148 CC lib/iscsi/portal_grp.o 00:06:38.148 CC lib/ftl/ftl_p2l_log.o 00:06:38.406 CC lib/iscsi/tgt_node.o 00:06:38.406 CC lib/ftl/mngt/ftl_mngt.o 00:06:38.406 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:38.406 CC lib/iscsi/iscsi_subsystem.o 00:06:38.665 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:38.665 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:38.665 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:38.922 CC lib/iscsi/iscsi_rpc.o 00:06:38.922 CC lib/iscsi/task.o 00:06:38.922 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:38.922 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:39.179 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:39.179 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:39.179 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:39.179 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:39.179 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:39.179 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:39.437 CC lib/ftl/utils/ftl_conf.o 00:06:39.437 CC lib/ftl/utils/ftl_md.o 00:06:39.437 CC lib/ftl/utils/ftl_mempool.o 00:06:39.695 CC lib/ftl/utils/ftl_bitmap.o 00:06:39.695 CC lib/ftl/utils/ftl_property.o 00:06:39.695 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:39.695 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:39.695 LIB libspdk_iscsi.a 00:06:39.695 LIB libspdk_nvmf.a 00:06:39.695 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:39.695 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:39.953 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:39.953 LIB libspdk_vhost.a 00:06:39.953 SO libspdk_iscsi.so.8.0 00:06:39.953 SO libspdk_nvmf.so.20.0 00:06:39.953 SO libspdk_vhost.so.8.0 00:06:39.953 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:40.211 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:40.211 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:40.211 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:40.211 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:40.211 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:40.211 SYMLINK libspdk_vhost.so 00:06:40.211 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 
00:06:40.211 SYMLINK libspdk_iscsi.so 00:06:40.211 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:40.211 CC lib/ftl/base/ftl_base_dev.o 00:06:40.469 CC lib/ftl/base/ftl_base_bdev.o 00:06:40.469 CC lib/ftl/ftl_trace.o 00:06:40.469 SYMLINK libspdk_nvmf.so 00:06:40.727 LIB libspdk_ftl.a 00:06:40.985 SO libspdk_ftl.so.9.0 00:06:41.551 SYMLINK libspdk_ftl.so 00:06:41.808 CC module/env_dpdk/env_dpdk_rpc.o 00:06:42.134 CC module/keyring/file/keyring.o 00:06:42.134 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:42.134 CC module/accel/ioat/accel_ioat.o 00:06:42.134 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:42.134 CC module/blob/bdev/blob_bdev.o 00:06:42.134 CC module/fsdev/aio/fsdev_aio.o 00:06:42.134 CC module/sock/posix/posix.o 00:06:42.134 CC module/accel/error/accel_error.o 00:06:42.134 CC module/scheduler/gscheduler/gscheduler.o 00:06:42.134 LIB libspdk_env_dpdk_rpc.a 00:06:42.134 SO libspdk_env_dpdk_rpc.so.6.0 00:06:42.134 CC module/keyring/file/keyring_rpc.o 00:06:42.134 LIB libspdk_scheduler_dpdk_governor.a 00:06:42.134 SYMLINK libspdk_env_dpdk_rpc.so 00:06:42.134 LIB libspdk_scheduler_gscheduler.a 00:06:42.134 CC module/accel/error/accel_error_rpc.o 00:06:42.134 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:42.134 SO libspdk_scheduler_gscheduler.so.4.0 00:06:42.392 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:42.392 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:42.392 SYMLINK libspdk_scheduler_gscheduler.so 00:06:42.392 CC module/fsdev/aio/linux_aio_mgr.o 00:06:42.392 LIB libspdk_keyring_file.a 00:06:42.392 CC module/accel/ioat/accel_ioat_rpc.o 00:06:42.392 LIB libspdk_scheduler_dynamic.a 00:06:42.392 SO libspdk_keyring_file.so.2.0 00:06:42.392 LIB libspdk_accel_error.a 00:06:42.392 SO libspdk_scheduler_dynamic.so.4.0 00:06:42.392 SO libspdk_accel_error.so.2.0 00:06:42.392 SYMLINK libspdk_keyring_file.so 00:06:42.392 SYMLINK libspdk_scheduler_dynamic.so 00:06:42.650 SYMLINK libspdk_accel_error.so 00:06:42.650 LIB libspdk_blob_bdev.a 00:06:42.650 
LIB libspdk_accel_ioat.a 00:06:42.650 SO libspdk_blob_bdev.so.12.0 00:06:42.650 SO libspdk_accel_ioat.so.6.0 00:06:42.650 CC module/accel/dsa/accel_dsa.o 00:06:42.650 CC module/accel/dsa/accel_dsa_rpc.o 00:06:42.650 SYMLINK libspdk_accel_ioat.so 00:06:42.650 SYMLINK libspdk_blob_bdev.so 00:06:42.650 CC module/keyring/linux/keyring.o 00:06:42.650 CC module/keyring/linux/keyring_rpc.o 00:06:42.650 CC module/accel/iaa/accel_iaa_rpc.o 00:06:42.650 CC module/accel/iaa/accel_iaa.o 00:06:42.908 LIB libspdk_keyring_linux.a 00:06:42.908 SO libspdk_keyring_linux.so.1.0 00:06:42.908 CC module/bdev/delay/vbdev_delay.o 00:06:42.908 CC module/bdev/error/vbdev_error.o 00:06:42.908 CC module/blobfs/bdev/blobfs_bdev.o 00:06:42.908 SYMLINK libspdk_keyring_linux.so 00:06:42.908 CC module/bdev/gpt/gpt.o 00:06:42.908 LIB libspdk_fsdev_aio.a 00:06:43.165 LIB libspdk_accel_iaa.a 00:06:43.165 SO libspdk_fsdev_aio.so.1.0 00:06:43.165 SO libspdk_accel_iaa.so.3.0 00:06:43.165 CC module/bdev/lvol/vbdev_lvol.o 00:06:43.165 LIB libspdk_accel_dsa.a 00:06:43.165 SYMLINK libspdk_fsdev_aio.so 00:06:43.165 SYMLINK libspdk_accel_iaa.so 00:06:43.165 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:43.165 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:43.165 CC module/bdev/malloc/bdev_malloc.o 00:06:43.165 SO libspdk_accel_dsa.so.5.0 00:06:43.165 CC module/bdev/error/vbdev_error_rpc.o 00:06:43.423 CC module/bdev/gpt/vbdev_gpt.o 00:06:43.423 SYMLINK libspdk_accel_dsa.so 00:06:43.423 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:43.423 LIB libspdk_blobfs_bdev.a 00:06:43.423 CC module/bdev/null/bdev_null.o 00:06:43.423 LIB libspdk_bdev_error.a 00:06:43.423 SO libspdk_blobfs_bdev.so.6.0 00:06:43.423 SO libspdk_bdev_error.so.6.0 00:06:43.423 LIB libspdk_sock_posix.a 00:06:43.423 CC module/bdev/null/bdev_null_rpc.o 00:06:43.681 SO libspdk_sock_posix.so.6.0 00:06:43.681 SYMLINK libspdk_bdev_error.so 00:06:43.681 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:43.681 SYMLINK libspdk_blobfs_bdev.so 00:06:43.681 LIB 
libspdk_bdev_gpt.a 00:06:43.681 SYMLINK libspdk_sock_posix.so 00:06:43.681 SO libspdk_bdev_gpt.so.6.0 00:06:43.681 LIB libspdk_bdev_malloc.a 00:06:43.681 SO libspdk_bdev_malloc.so.6.0 00:06:43.681 SYMLINK libspdk_bdev_gpt.so 00:06:43.681 CC module/bdev/nvme/bdev_nvme.o 00:06:43.939 SYMLINK libspdk_bdev_malloc.so 00:06:43.939 CC module/bdev/passthru/vbdev_passthru.o 00:06:43.939 LIB libspdk_bdev_delay.a 00:06:43.939 CC module/bdev/raid/bdev_raid.o 00:06:43.939 CC module/bdev/split/vbdev_split.o 00:06:43.939 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:43.939 LIB libspdk_bdev_null.a 00:06:43.939 SO libspdk_bdev_delay.so.6.0 00:06:43.939 SO libspdk_bdev_null.so.6.0 00:06:43.939 CC module/bdev/aio/bdev_aio.o 00:06:44.197 CC module/bdev/ftl/bdev_ftl.o 00:06:44.197 SYMLINK libspdk_bdev_delay.so 00:06:44.197 CC module/bdev/raid/bdev_raid_rpc.o 00:06:44.197 SYMLINK libspdk_bdev_null.so 00:06:44.197 LIB libspdk_bdev_lvol.a 00:06:44.197 SO libspdk_bdev_lvol.so.6.0 00:06:44.454 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:44.454 SYMLINK libspdk_bdev_lvol.so 00:06:44.454 CC module/bdev/iscsi/bdev_iscsi.o 00:06:44.454 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:44.454 CC module/bdev/split/vbdev_split_rpc.o 00:06:44.454 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:44.713 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:44.713 LIB libspdk_bdev_ftl.a 00:06:44.713 LIB libspdk_bdev_passthru.a 00:06:44.713 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:44.713 SO libspdk_bdev_ftl.so.6.0 00:06:44.713 SO libspdk_bdev_passthru.so.6.0 00:06:44.713 CC module/bdev/aio/bdev_aio_rpc.o 00:06:44.713 LIB libspdk_bdev_split.a 00:06:44.713 LIB libspdk_bdev_zone_block.a 00:06:44.713 SO libspdk_bdev_split.so.6.0 00:06:44.713 SYMLINK libspdk_bdev_ftl.so 00:06:44.971 SO libspdk_bdev_zone_block.so.6.0 00:06:44.971 CC module/bdev/raid/bdev_raid_sb.o 00:06:44.971 SYMLINK libspdk_bdev_passthru.so 00:06:44.971 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:44.971 SYMLINK libspdk_bdev_split.so 
00:06:44.971 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:44.971 SYMLINK libspdk_bdev_zone_block.so 00:06:44.971 CC module/bdev/nvme/nvme_rpc.o 00:06:44.971 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:44.971 LIB libspdk_bdev_aio.a 00:06:45.229 SO libspdk_bdev_aio.so.6.0 00:06:45.229 LIB libspdk_bdev_iscsi.a 00:06:45.229 SYMLINK libspdk_bdev_aio.so 00:06:45.229 CC module/bdev/raid/raid0.o 00:06:45.229 SO libspdk_bdev_iscsi.so.6.0 00:06:45.229 CC module/bdev/nvme/bdev_mdns_client.o 00:06:45.229 CC module/bdev/nvme/vbdev_opal.o 00:06:45.487 SYMLINK libspdk_bdev_iscsi.so 00:06:45.487 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:45.487 LIB libspdk_bdev_virtio.a 00:06:45.487 CC module/bdev/raid/raid1.o 00:06:45.487 CC module/bdev/raid/concat.o 00:06:45.487 SO libspdk_bdev_virtio.so.6.0 00:06:45.745 SYMLINK libspdk_bdev_virtio.so 00:06:45.745 CC module/bdev/raid/raid5f.o 00:06:45.745 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:46.338 LIB libspdk_bdev_raid.a 00:06:46.597 SO libspdk_bdev_raid.so.6.0 00:06:46.597 SYMLINK libspdk_bdev_raid.so 00:06:47.971 LIB libspdk_bdev_nvme.a 00:06:47.971 SO libspdk_bdev_nvme.so.7.1 00:06:48.229 SYMLINK libspdk_bdev_nvme.so 00:06:48.794 CC module/event/subsystems/vmd/vmd.o 00:06:48.794 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:48.794 CC module/event/subsystems/keyring/keyring.o 00:06:48.794 CC module/event/subsystems/sock/sock.o 00:06:48.794 CC module/event/subsystems/scheduler/scheduler.o 00:06:48.794 CC module/event/subsystems/iobuf/iobuf.o 00:06:48.794 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:48.794 CC module/event/subsystems/fsdev/fsdev.o 00:06:48.794 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:48.794 LIB libspdk_event_fsdev.a 00:06:48.794 SO libspdk_event_fsdev.so.1.0 00:06:48.794 LIB libspdk_event_keyring.a 00:06:48.794 LIB libspdk_event_vmd.a 00:06:48.794 LIB libspdk_event_sock.a 00:06:48.794 SO libspdk_event_keyring.so.1.0 00:06:49.052 LIB libspdk_event_vhost_blk.a 00:06:49.052 SO libspdk_event_vmd.so.6.0 
00:06:49.052 SO libspdk_event_sock.so.5.0 00:06:49.052 LIB libspdk_event_scheduler.a 00:06:49.052 SYMLINK libspdk_event_fsdev.so 00:06:49.052 LIB libspdk_event_iobuf.a 00:06:49.052 SO libspdk_event_vhost_blk.so.3.0 00:06:49.052 SO libspdk_event_scheduler.so.4.0 00:06:49.052 SO libspdk_event_iobuf.so.3.0 00:06:49.052 SYMLINK libspdk_event_keyring.so 00:06:49.052 SYMLINK libspdk_event_sock.so 00:06:49.052 SYMLINK libspdk_event_vmd.so 00:06:49.052 SYMLINK libspdk_event_vhost_blk.so 00:06:49.052 SYMLINK libspdk_event_scheduler.so 00:06:49.052 SYMLINK libspdk_event_iobuf.so 00:06:49.310 CC module/event/subsystems/accel/accel.o 00:06:49.569 LIB libspdk_event_accel.a 00:06:49.569 SO libspdk_event_accel.so.6.0 00:06:49.569 SYMLINK libspdk_event_accel.so 00:06:49.827 CC module/event/subsystems/bdev/bdev.o 00:06:50.085 LIB libspdk_event_bdev.a 00:06:50.085 SO libspdk_event_bdev.so.6.0 00:06:50.085 SYMLINK libspdk_event_bdev.so 00:06:50.343 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:50.343 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:50.343 CC module/event/subsystems/scsi/scsi.o 00:06:50.343 CC module/event/subsystems/ublk/ublk.o 00:06:50.343 CC module/event/subsystems/nbd/nbd.o 00:06:50.601 LIB libspdk_event_nbd.a 00:06:50.601 LIB libspdk_event_scsi.a 00:06:50.601 LIB libspdk_event_ublk.a 00:06:50.601 SO libspdk_event_nbd.so.6.0 00:06:50.601 SO libspdk_event_scsi.so.6.0 00:06:50.601 SO libspdk_event_ublk.so.3.0 00:06:50.601 SYMLINK libspdk_event_ublk.so 00:06:50.601 SYMLINK libspdk_event_scsi.so 00:06:50.601 SYMLINK libspdk_event_nbd.so 00:06:50.859 LIB libspdk_event_nvmf.a 00:06:50.859 SO libspdk_event_nvmf.so.6.0 00:06:50.859 CC module/event/subsystems/iscsi/iscsi.o 00:06:50.859 SYMLINK libspdk_event_nvmf.so 00:06:50.859 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:51.117 LIB libspdk_event_iscsi.a 00:06:51.117 LIB libspdk_event_vhost_scsi.a 00:06:51.117 SO libspdk_event_iscsi.so.6.0 00:06:51.117 SO libspdk_event_vhost_scsi.so.3.0 00:06:51.376 SYMLINK 
libspdk_event_vhost_scsi.so 00:06:51.376 SYMLINK libspdk_event_iscsi.so 00:06:51.376 SO libspdk.so.6.0 00:06:51.376 SYMLINK libspdk.so 00:06:51.634 CC app/trace_record/trace_record.o 00:06:51.634 CXX app/trace/trace.o 00:06:51.634 CC app/nvmf_tgt/nvmf_main.o 00:06:51.634 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:51.634 CC app/iscsi_tgt/iscsi_tgt.o 00:06:51.893 CC examples/ioat/perf/perf.o 00:06:51.893 CC examples/util/zipf/zipf.o 00:06:51.893 CC test/thread/poller_perf/poller_perf.o 00:06:51.893 CC test/app/bdev_svc/bdev_svc.o 00:06:51.893 CC test/dma/test_dma/test_dma.o 00:06:52.151 LINK nvmf_tgt 00:06:52.151 LINK interrupt_tgt 00:06:52.151 LINK poller_perf 00:06:52.151 LINK zipf 00:06:52.151 LINK iscsi_tgt 00:06:52.151 LINK spdk_trace_record 00:06:52.151 LINK bdev_svc 00:06:52.151 LINK ioat_perf 00:06:52.151 LINK spdk_trace 00:06:52.719 CC examples/ioat/verify/verify.o 00:06:52.719 TEST_HEADER include/spdk/accel.h 00:06:52.719 TEST_HEADER include/spdk/accel_module.h 00:06:52.719 TEST_HEADER include/spdk/assert.h 00:06:52.719 TEST_HEADER include/spdk/barrier.h 00:06:52.719 TEST_HEADER include/spdk/base64.h 00:06:52.719 TEST_HEADER include/spdk/bdev.h 00:06:52.719 TEST_HEADER include/spdk/bdev_module.h 00:06:52.719 TEST_HEADER include/spdk/bdev_zone.h 00:06:52.719 TEST_HEADER include/spdk/bit_array.h 00:06:52.719 TEST_HEADER include/spdk/bit_pool.h 00:06:52.719 TEST_HEADER include/spdk/blob_bdev.h 00:06:52.719 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:52.719 TEST_HEADER include/spdk/blobfs.h 00:06:52.719 CC app/spdk_lspci/spdk_lspci.o 00:06:52.719 TEST_HEADER include/spdk/blob.h 00:06:52.719 TEST_HEADER include/spdk/conf.h 00:06:52.719 TEST_HEADER include/spdk/config.h 00:06:52.719 TEST_HEADER include/spdk/cpuset.h 00:06:52.719 TEST_HEADER include/spdk/crc16.h 00:06:52.719 TEST_HEADER include/spdk/crc32.h 00:06:52.719 TEST_HEADER include/spdk/crc64.h 00:06:52.719 TEST_HEADER include/spdk/dif.h 00:06:52.719 TEST_HEADER include/spdk/dma.h 00:06:52.719 
TEST_HEADER include/spdk/endian.h 00:06:52.719 TEST_HEADER include/spdk/env_dpdk.h 00:06:52.719 TEST_HEADER include/spdk/env.h 00:06:52.719 TEST_HEADER include/spdk/event.h 00:06:52.719 TEST_HEADER include/spdk/fd_group.h 00:06:52.719 TEST_HEADER include/spdk/fd.h 00:06:52.719 TEST_HEADER include/spdk/file.h 00:06:52.719 TEST_HEADER include/spdk/fsdev.h 00:06:52.719 TEST_HEADER include/spdk/fsdev_module.h 00:06:52.719 TEST_HEADER include/spdk/ftl.h 00:06:52.719 CC app/spdk_nvme_perf/perf.o 00:06:52.719 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:52.719 TEST_HEADER include/spdk/gpt_spec.h 00:06:52.719 TEST_HEADER include/spdk/hexlify.h 00:06:52.719 TEST_HEADER include/spdk/histogram_data.h 00:06:52.719 TEST_HEADER include/spdk/idxd.h 00:06:52.719 CC app/spdk_tgt/spdk_tgt.o 00:06:52.719 TEST_HEADER include/spdk/idxd_spec.h 00:06:52.719 TEST_HEADER include/spdk/init.h 00:06:52.719 TEST_HEADER include/spdk/ioat.h 00:06:52.719 TEST_HEADER include/spdk/ioat_spec.h 00:06:52.719 TEST_HEADER include/spdk/iscsi_spec.h 00:06:52.719 TEST_HEADER include/spdk/json.h 00:06:52.719 TEST_HEADER include/spdk/jsonrpc.h 00:06:52.719 TEST_HEADER include/spdk/keyring.h 00:06:52.719 TEST_HEADER include/spdk/keyring_module.h 00:06:52.719 TEST_HEADER include/spdk/likely.h 00:06:52.719 TEST_HEADER include/spdk/log.h 00:06:52.719 CC app/spdk_nvme_identify/identify.o 00:06:52.719 TEST_HEADER include/spdk/lvol.h 00:06:52.719 TEST_HEADER include/spdk/md5.h 00:06:52.719 TEST_HEADER include/spdk/memory.h 00:06:52.719 TEST_HEADER include/spdk/mmio.h 00:06:52.719 TEST_HEADER include/spdk/nbd.h 00:06:52.719 TEST_HEADER include/spdk/net.h 00:06:52.719 TEST_HEADER include/spdk/notify.h 00:06:52.719 TEST_HEADER include/spdk/nvme.h 00:06:52.719 TEST_HEADER include/spdk/nvme_intel.h 00:06:52.719 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:52.719 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:52.719 TEST_HEADER include/spdk/nvme_spec.h 00:06:52.719 TEST_HEADER include/spdk/nvme_zns.h 00:06:52.719 
TEST_HEADER include/spdk/nvmf_cmd.h 00:06:52.719 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:52.719 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:52.719 CC app/spdk_nvme_discover/discovery_aer.o 00:06:52.719 TEST_HEADER include/spdk/nvmf.h 00:06:52.719 TEST_HEADER include/spdk/nvmf_spec.h 00:06:52.978 TEST_HEADER include/spdk/nvmf_transport.h 00:06:52.978 TEST_HEADER include/spdk/opal.h 00:06:52.978 TEST_HEADER include/spdk/opal_spec.h 00:06:52.978 TEST_HEADER include/spdk/pci_ids.h 00:06:52.978 TEST_HEADER include/spdk/pipe.h 00:06:52.978 TEST_HEADER include/spdk/queue.h 00:06:52.978 TEST_HEADER include/spdk/reduce.h 00:06:52.978 TEST_HEADER include/spdk/rpc.h 00:06:52.978 TEST_HEADER include/spdk/scheduler.h 00:06:52.978 TEST_HEADER include/spdk/scsi.h 00:06:52.978 TEST_HEADER include/spdk/scsi_spec.h 00:06:52.978 LINK spdk_lspci 00:06:52.978 TEST_HEADER include/spdk/sock.h 00:06:52.978 TEST_HEADER include/spdk/stdinc.h 00:06:52.978 TEST_HEADER include/spdk/string.h 00:06:52.979 TEST_HEADER include/spdk/thread.h 00:06:52.979 TEST_HEADER include/spdk/trace.h 00:06:52.979 TEST_HEADER include/spdk/trace_parser.h 00:06:52.979 TEST_HEADER include/spdk/tree.h 00:06:52.979 TEST_HEADER include/spdk/ublk.h 00:06:52.979 TEST_HEADER include/spdk/util.h 00:06:52.979 TEST_HEADER include/spdk/uuid.h 00:06:52.979 TEST_HEADER include/spdk/version.h 00:06:52.979 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:52.979 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:52.979 TEST_HEADER include/spdk/vhost.h 00:06:52.979 LINK test_dma 00:06:52.979 TEST_HEADER include/spdk/vmd.h 00:06:52.979 TEST_HEADER include/spdk/xor.h 00:06:52.979 TEST_HEADER include/spdk/zipf.h 00:06:52.979 CXX test/cpp_headers/accel.o 00:06:52.979 LINK verify 00:06:53.237 LINK spdk_tgt 00:06:53.237 CC test/env/mem_callbacks/mem_callbacks.o 00:06:53.237 LINK spdk_nvme_discover 00:06:53.237 CXX test/cpp_headers/accel_module.o 00:06:53.237 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:53.495 CC 
test/env/vtophys/vtophys.o 00:06:53.495 LINK nvme_fuzz 00:06:53.495 CXX test/cpp_headers/assert.o 00:06:53.495 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:53.495 LINK vtophys 00:06:53.753 CC examples/sock/hello_world/hello_sock.o 00:06:53.753 CC examples/thread/thread/thread_ex.o 00:06:53.753 LINK env_dpdk_post_init 00:06:53.753 LINK mem_callbacks 00:06:53.753 CC test/env/memory/memory_ut.o 00:06:54.010 CXX test/cpp_headers/barrier.o 00:06:54.010 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:54.010 LINK thread 00:06:54.010 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:54.010 CC app/spdk_top/spdk_top.o 00:06:54.010 CXX test/cpp_headers/base64.o 00:06:54.268 LINK hello_sock 00:06:54.268 CXX test/cpp_headers/bdev.o 00:06:54.526 CC app/spdk_dd/spdk_dd.o 00:06:54.526 CC app/vhost/vhost.o 00:06:54.526 CXX test/cpp_headers/bdev_module.o 00:06:54.526 LINK vhost_fuzz 00:06:54.526 LINK spdk_nvme_perf 00:06:54.784 CC examples/vmd/lsvmd/lsvmd.o 00:06:54.784 LINK vhost 00:06:54.784 LINK spdk_dd 00:06:54.784 CXX test/cpp_headers/bdev_zone.o 00:06:54.784 LINK spdk_nvme_identify 00:06:55.042 LINK lsvmd 00:06:55.042 CC test/app/histogram_perf/histogram_perf.o 00:06:55.042 CXX test/cpp_headers/bit_array.o 00:06:55.042 CC app/fio/nvme/fio_plugin.o 00:06:55.300 LINK histogram_perf 00:06:55.300 CC test/env/pci/pci_ut.o 00:06:55.300 CXX test/cpp_headers/bit_pool.o 00:06:55.300 CC examples/vmd/led/led.o 00:06:55.558 CC test/event/event_perf/event_perf.o 00:06:55.558 CC test/nvme/aer/aer.o 00:06:55.558 CC test/event/reactor/reactor.o 00:06:55.558 CXX test/cpp_headers/blob_bdev.o 00:06:55.826 LINK led 00:06:55.826 LINK event_perf 00:06:55.826 LINK memory_ut 00:06:55.826 LINK reactor 00:06:55.826 LINK aer 00:06:56.083 LINK spdk_top 00:06:56.083 CXX test/cpp_headers/blobfs_bdev.o 00:06:56.083 CXX test/cpp_headers/blobfs.o 00:06:56.083 LINK spdk_nvme 00:06:56.083 CC test/event/reactor_perf/reactor_perf.o 00:06:56.083 LINK pci_ut 00:06:56.083 CC test/nvme/reset/reset.o 
00:06:56.341 CC examples/idxd/perf/perf.o 00:06:56.341 LINK reactor_perf 00:06:56.341 CXX test/cpp_headers/blob.o 00:06:56.341 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:56.341 CC test/nvme/sgl/sgl.o 00:06:56.341 CC app/fio/bdev/fio_plugin.o 00:06:56.341 CC test/event/app_repeat/app_repeat.o 00:06:56.599 CXX test/cpp_headers/conf.o 00:06:56.599 LINK iscsi_fuzz 00:06:56.599 LINK reset 00:06:56.599 LINK app_repeat 00:06:56.599 CC test/event/scheduler/scheduler.o 00:06:56.599 CC test/rpc_client/rpc_client_test.o 00:06:56.599 LINK idxd_perf 00:06:56.856 CXX test/cpp_headers/config.o 00:06:56.856 LINK hello_fsdev 00:06:56.856 CXX test/cpp_headers/cpuset.o 00:06:56.856 CXX test/cpp_headers/crc16.o 00:06:56.856 LINK scheduler 00:06:56.856 CC test/app/jsoncat/jsoncat.o 00:06:56.856 LINK sgl 00:06:56.856 LINK rpc_client_test 00:06:56.856 CC test/nvme/e2edp/nvme_dp.o 00:06:57.119 CXX test/cpp_headers/crc32.o 00:06:57.119 CC test/accel/dif/dif.o 00:06:57.119 LINK spdk_bdev 00:06:57.119 CC test/nvme/overhead/overhead.o 00:06:57.119 LINK jsoncat 00:06:57.119 CC examples/accel/perf/accel_perf.o 00:06:57.119 CXX test/cpp_headers/crc64.o 00:06:57.375 CC test/nvme/err_injection/err_injection.o 00:06:57.375 CC test/app/stub/stub.o 00:06:57.375 CC test/nvme/startup/startup.o 00:06:57.375 CC test/nvme/reserve/reserve.o 00:06:57.375 CC test/blobfs/mkfs/mkfs.o 00:06:57.375 CXX test/cpp_headers/dif.o 00:06:57.375 LINK overhead 00:06:57.632 LINK err_injection 00:06:57.632 LINK nvme_dp 00:06:57.632 LINK startup 00:06:57.632 LINK reserve 00:06:57.632 CXX test/cpp_headers/dma.o 00:06:57.632 CXX test/cpp_headers/endian.o 00:06:57.632 LINK mkfs 00:06:57.632 LINK stub 00:06:57.632 CXX test/cpp_headers/env_dpdk.o 00:06:57.889 CXX test/cpp_headers/env.o 00:06:57.889 CXX test/cpp_headers/event.o 00:06:57.889 LINK accel_perf 00:06:57.889 CXX test/cpp_headers/fd_group.o 00:06:57.889 LINK dif 00:06:57.889 CC test/nvme/simple_copy/simple_copy.o 00:06:58.147 CXX test/cpp_headers/fd.o 
00:06:58.147 CC examples/nvme/hello_world/hello_world.o 00:06:58.147 CC examples/blob/hello_world/hello_blob.o 00:06:58.147 CXX test/cpp_headers/file.o 00:06:58.147 CC examples/blob/cli/blobcli.o 00:06:58.147 CXX test/cpp_headers/fsdev.o 00:06:58.147 CC test/nvme/connect_stress/connect_stress.o 00:06:58.147 CC test/lvol/esnap/esnap.o 00:06:58.405 CC examples/nvme/reconnect/reconnect.o 00:06:58.405 LINK simple_copy 00:06:58.405 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:58.405 LINK hello_world 00:06:58.405 CXX test/cpp_headers/fsdev_module.o 00:06:58.405 LINK connect_stress 00:06:58.405 LINK hello_blob 00:06:58.405 CC examples/nvme/arbitration/arbitration.o 00:06:58.703 CXX test/cpp_headers/ftl.o 00:06:58.703 CC test/nvme/boot_partition/boot_partition.o 00:06:58.703 CC test/nvme/compliance/nvme_compliance.o 00:06:58.703 CC test/bdev/bdevio/bdevio.o 00:06:58.703 LINK reconnect 00:06:58.703 CXX test/cpp_headers/fuse_dispatcher.o 00:06:58.959 LINK blobcli 00:06:58.959 LINK arbitration 00:06:58.959 CC examples/bdev/hello_world/hello_bdev.o 00:06:58.959 LINK boot_partition 00:06:58.959 CXX test/cpp_headers/gpt_spec.o 00:06:58.959 LINK nvme_manage 00:06:59.216 CC examples/bdev/bdevperf/bdevperf.o 00:06:59.216 CC examples/nvme/hotplug/hotplug.o 00:06:59.216 LINK hello_bdev 00:06:59.216 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:59.216 LINK nvme_compliance 00:06:59.216 CXX test/cpp_headers/hexlify.o 00:06:59.216 CXX test/cpp_headers/histogram_data.o 00:06:59.216 LINK bdevio 00:06:59.216 CC examples/nvme/abort/abort.o 00:06:59.474 LINK cmb_copy 00:06:59.474 CXX test/cpp_headers/idxd.o 00:06:59.474 LINK hotplug 00:06:59.474 CC test/nvme/fused_ordering/fused_ordering.o 00:06:59.474 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:59.474 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:59.731 CC test/nvme/fdp/fdp.o 00:06:59.731 CXX test/cpp_headers/idxd_spec.o 00:06:59.731 CXX test/cpp_headers/init.o 00:06:59.731 CC test/nvme/cuse/cuse.o 00:06:59.731 LINK abort 
00:06:59.731 LINK pmr_persistence 00:06:59.731 LINK doorbell_aers 00:06:59.731 LINK fused_ordering 00:06:59.988 CXX test/cpp_headers/ioat.o 00:06:59.988 CXX test/cpp_headers/ioat_spec.o 00:06:59.988 CXX test/cpp_headers/iscsi_spec.o 00:06:59.988 CXX test/cpp_headers/json.o 00:06:59.988 CXX test/cpp_headers/jsonrpc.o 00:06:59.988 CXX test/cpp_headers/keyring.o 00:06:59.988 LINK fdp 00:06:59.988 CXX test/cpp_headers/keyring_module.o 00:07:00.245 CXX test/cpp_headers/likely.o 00:07:00.245 CXX test/cpp_headers/log.o 00:07:00.245 CXX test/cpp_headers/lvol.o 00:07:00.245 LINK bdevperf 00:07:00.245 CXX test/cpp_headers/md5.o 00:07:00.245 CXX test/cpp_headers/memory.o 00:07:00.245 CXX test/cpp_headers/mmio.o 00:07:00.245 CXX test/cpp_headers/nbd.o 00:07:00.245 CXX test/cpp_headers/net.o 00:07:00.245 CXX test/cpp_headers/notify.o 00:07:00.245 CXX test/cpp_headers/nvme.o 00:07:00.245 CXX test/cpp_headers/nvme_intel.o 00:07:00.504 CXX test/cpp_headers/nvme_ocssd.o 00:07:00.504 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:00.504 CXX test/cpp_headers/nvme_spec.o 00:07:00.504 CXX test/cpp_headers/nvme_zns.o 00:07:00.504 CXX test/cpp_headers/nvmf_cmd.o 00:07:00.504 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:00.504 CXX test/cpp_headers/nvmf.o 00:07:00.762 CXX test/cpp_headers/nvmf_spec.o 00:07:00.762 CXX test/cpp_headers/nvmf_transport.o 00:07:00.762 CXX test/cpp_headers/opal.o 00:07:00.762 CC examples/nvmf/nvmf/nvmf.o 00:07:00.762 CXX test/cpp_headers/opal_spec.o 00:07:00.762 CXX test/cpp_headers/pci_ids.o 00:07:00.762 CXX test/cpp_headers/pipe.o 00:07:00.762 CXX test/cpp_headers/queue.o 00:07:00.762 CXX test/cpp_headers/reduce.o 00:07:00.762 CXX test/cpp_headers/rpc.o 00:07:00.762 CXX test/cpp_headers/scheduler.o 00:07:01.020 CXX test/cpp_headers/scsi.o 00:07:01.020 CXX test/cpp_headers/scsi_spec.o 00:07:01.020 CXX test/cpp_headers/sock.o 00:07:01.020 CXX test/cpp_headers/stdinc.o 00:07:01.020 CXX test/cpp_headers/string.o 00:07:01.020 CXX test/cpp_headers/thread.o 00:07:01.020 
CXX test/cpp_headers/trace.o 00:07:01.020 LINK nvmf 00:07:01.278 CXX test/cpp_headers/trace_parser.o 00:07:01.278 CXX test/cpp_headers/tree.o 00:07:01.278 CXX test/cpp_headers/ublk.o 00:07:01.278 CXX test/cpp_headers/util.o 00:07:01.278 CXX test/cpp_headers/uuid.o 00:07:01.278 CXX test/cpp_headers/version.o 00:07:01.278 CXX test/cpp_headers/vfio_user_pci.o 00:07:01.278 CXX test/cpp_headers/vfio_user_spec.o 00:07:01.278 CXX test/cpp_headers/vhost.o 00:07:01.278 CXX test/cpp_headers/vmd.o 00:07:01.278 CXX test/cpp_headers/xor.o 00:07:01.536 CXX test/cpp_headers/zipf.o 00:07:01.536 LINK cuse 00:07:05.720 LINK esnap 00:07:06.289 00:07:06.289 real 1m49.532s 00:07:06.289 user 10m19.313s 00:07:06.289 sys 1m55.006s 00:07:06.289 14:06:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:06.289 ************************************ 00:07:06.289 END TEST make 00:07:06.289 14:06:36 make -- common/autotest_common.sh@10 -- $ set +x 00:07:06.289 ************************************ 00:07:06.289 14:06:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:06.289 14:06:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:06.289 14:06:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:06.289 14:06:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.289 14:06:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:06.289 14:06:36 -- pm/common@44 -- $ pid=5309 00:07:06.289 14:06:36 -- pm/common@50 -- $ kill -TERM 5309 00:07:06.289 14:06:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.289 14:06:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:06.289 14:06:36 -- pm/common@44 -- $ pid=5311 00:07:06.289 14:06:36 -- pm/common@50 -- $ kill -TERM 5311 00:07:06.289 14:06:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:06.289 14:06:36 -- spdk/autorun.sh@27 
-- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:06.289 14:06:36 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.289 14:06:36 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.289 14:06:36 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.289 14:06:36 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.289 14:06:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.289 14:06:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.289 14:06:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.289 14:06:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.289 14:06:36 -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.289 14:06:36 -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.289 14:06:36 -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.289 14:06:36 -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.289 14:06:36 -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.289 14:06:36 -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.289 14:06:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.289 14:06:36 -- scripts/common.sh@344 -- # case "$op" in 00:07:06.289 14:06:36 -- scripts/common.sh@345 -- # : 1 00:07:06.289 14:06:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.289 14:06:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.289 14:06:36 -- scripts/common.sh@365 -- # decimal 1 00:07:06.289 14:06:36 -- scripts/common.sh@353 -- # local d=1 00:07:06.289 14:06:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.289 14:06:36 -- scripts/common.sh@355 -- # echo 1 00:07:06.289 14:06:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.289 14:06:36 -- scripts/common.sh@366 -- # decimal 2 00:07:06.289 14:06:36 -- scripts/common.sh@353 -- # local d=2 00:07:06.289 14:06:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.289 14:06:36 -- scripts/common.sh@355 -- # echo 2 00:07:06.289 14:06:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.289 14:06:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.289 14:06:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.289 14:06:36 -- scripts/common.sh@368 -- # return 0 00:07:06.289 14:06:36 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.289 14:06:36 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.289 --rc genhtml_branch_coverage=1 00:07:06.289 --rc genhtml_function_coverage=1 00:07:06.289 --rc genhtml_legend=1 00:07:06.289 --rc geninfo_all_blocks=1 00:07:06.289 --rc geninfo_unexecuted_blocks=1 00:07:06.289 00:07:06.289 ' 00:07:06.289 14:06:36 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.289 --rc genhtml_branch_coverage=1 00:07:06.289 --rc genhtml_function_coverage=1 00:07:06.289 --rc genhtml_legend=1 00:07:06.289 --rc geninfo_all_blocks=1 00:07:06.289 --rc geninfo_unexecuted_blocks=1 00:07:06.289 00:07:06.289 ' 00:07:06.289 14:06:36 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.289 --rc genhtml_branch_coverage=1 00:07:06.289 --rc 
genhtml_function_coverage=1 00:07:06.289 --rc genhtml_legend=1 00:07:06.289 --rc geninfo_all_blocks=1 00:07:06.289 --rc geninfo_unexecuted_blocks=1 00:07:06.289 00:07:06.289 ' 00:07:06.289 14:06:36 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.289 --rc genhtml_branch_coverage=1 00:07:06.289 --rc genhtml_function_coverage=1 00:07:06.290 --rc genhtml_legend=1 00:07:06.290 --rc geninfo_all_blocks=1 00:07:06.290 --rc geninfo_unexecuted_blocks=1 00:07:06.290 00:07:06.290 ' 00:07:06.290 14:06:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:06.290 14:06:36 -- nvmf/common.sh@7 -- # uname -s 00:07:06.290 14:06:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.290 14:06:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.290 14:06:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.290 14:06:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.290 14:06:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.290 14:06:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.290 14:06:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.290 14:06:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.290 14:06:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.290 14:06:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.290 14:06:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5783c65c-7dfc-4d47-9814-200973a46653 00:07:06.290 14:06:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5783c65c-7dfc-4d47-9814-200973a46653 00:07:06.290 14:06:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.290 14:06:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.290 14:06:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:06.290 14:06:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:06.290 14:06:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.290 14:06:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.290 14:06:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.290 14:06:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.290 14:06:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.290 14:06:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.290 14:06:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.290 14:06:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.290 14:06:36 -- paths/export.sh@5 -- # export PATH 00:07:06.290 14:06:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.290 14:06:36 -- nvmf/common.sh@51 -- # : 0 00:07:06.290 14:06:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.290 14:06:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.290 14:06:36 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:07:06.290 14:06:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.290 14:06:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.290 14:06:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.290 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.290 14:06:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.290 14:06:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.290 14:06:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.290 14:06:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:06.290 14:06:36 -- spdk/autotest.sh@32 -- # uname -s 00:07:06.290 14:06:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:06.290 14:06:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:06.290 14:06:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:06.290 14:06:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:06.290 14:06:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:06.290 14:06:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:06.549 14:06:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:06.549 14:06:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:06.549 14:06:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:06.549 14:06:36 -- spdk/autotest.sh@48 -- # udevadm_pid=54503 00:07:06.549 14:06:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:06.549 14:06:36 -- pm/common@17 -- # local monitor 00:07:06.549 14:06:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.549 14:06:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.549 14:06:36 -- pm/common@21 -- # date +%s 00:07:06.549 14:06:36 -- pm/common@25 -- # sleep 1 00:07:06.549 14:06:36 -- 
pm/common@21 -- # date +%s 00:07:06.549 14:06:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732716396 00:07:06.549 14:06:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732716396 00:07:06.549 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732716396_collect-cpu-load.pm.log 00:07:06.549 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732716396_collect-vmstat.pm.log 00:07:07.485 14:06:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:07.485 14:06:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:07.485 14:06:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.485 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 14:06:37 -- spdk/autotest.sh@59 -- # create_test_list 00:07:07.485 14:06:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:07.485 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.485 14:06:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:07.485 14:06:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:07.485 14:06:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:07.485 14:06:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:07.485 14:06:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:07.485 14:06:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:07.485 14:06:37 -- common/autotest_common.sh@1457 -- # uname 00:07:07.485 14:06:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:07.485 14:06:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:07.485 14:06:37 -- common/autotest_common.sh@1477 -- 
# uname 00:07:07.485 14:06:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:07.485 14:06:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:07.485 14:06:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:07.743 lcov: LCOV version 1.15 00:07:07.743 14:06:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:25.824 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:25.824 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:43.968 14:07:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:43.968 14:07:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.968 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.968 14:07:12 -- spdk/autotest.sh@78 -- # rm -f 00:07:43.968 14:07:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.968 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:43.968 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:43.968 14:07:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:43.968 14:07:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:43.968 14:07:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:43.968 14:07:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:43.968 
14:07:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:43.968 14:07:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:43.968 14:07:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:43.968 14:07:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:43.968 14:07:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:43.968 14:07:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:43.968 14:07:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:43.968 14:07:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:07:43.968 14:07:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:07:43.968 14:07:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:43.968 14:07:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:07:43.968 14:07:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:07:43.968 14:07:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:43.968 14:07:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:43.968 14:07:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:43.968 14:07:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:43.968 14:07:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:43.968 14:07:13 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:07:43.968 14:07:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:43.968 14:07:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:43.968 No valid GPT data, bailing 00:07:43.968 14:07:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:43.968 14:07:13 -- scripts/common.sh@394 -- # pt= 00:07:43.968 14:07:13 -- scripts/common.sh@395 -- # return 1 00:07:43.968 14:07:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:43.968 1+0 records in 00:07:43.968 1+0 records out 00:07:43.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415405 s, 252 MB/s 00:07:43.968 14:07:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:43.968 14:07:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:43.968 14:07:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:43.968 14:07:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:43.968 14:07:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:43.968 No valid GPT data, bailing 00:07:43.968 14:07:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:43.968 14:07:13 -- scripts/common.sh@394 -- # pt= 00:07:43.968 14:07:13 -- scripts/common.sh@395 -- # return 1 00:07:43.968 14:07:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:43.968 1+0 records in 00:07:43.968 1+0 records out 00:07:43.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424413 s, 247 MB/s 00:07:43.968 14:07:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:43.968 14:07:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:43.969 14:07:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:43.969 14:07:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:43.969 14:07:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:07:43.969 No valid GPT data, bailing 00:07:43.969 14:07:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:43.969 14:07:13 -- scripts/common.sh@394 -- # pt= 00:07:43.969 14:07:13 -- scripts/common.sh@395 -- # return 1 00:07:43.969 14:07:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:43.969 1+0 records in 00:07:43.969 1+0 records out 00:07:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039523 s, 265 MB/s 00:07:43.969 14:07:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:43.969 14:07:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:43.969 14:07:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:43.969 14:07:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:43.969 14:07:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:43.969 No valid GPT data, bailing 00:07:43.969 14:07:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:43.969 14:07:13 -- scripts/common.sh@394 -- # pt= 00:07:43.969 14:07:13 -- scripts/common.sh@395 -- # return 1 00:07:43.969 14:07:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:43.969 1+0 records in 00:07:43.969 1+0 records out 00:07:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404658 s, 259 MB/s 00:07:43.969 14:07:13 -- spdk/autotest.sh@105 -- # sync 00:07:43.969 14:07:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:43.969 14:07:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:43.969 14:07:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:44.904 14:07:15 -- spdk/autotest.sh@111 -- # uname -s 00:07:44.904 14:07:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:44.904 14:07:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:44.904 14:07:15 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:07:45.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.471 Hugepages 00:07:45.729 node hugesize free / total 00:07:45.729 node0 1048576kB 0 / 0 00:07:45.729 node0 2048kB 0 / 0 00:07:45.729 00:07:45.729 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:45.729 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:45.729 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:45.729 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:45.729 14:07:16 -- spdk/autotest.sh@117 -- # uname -s 00:07:45.729 14:07:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:45.729 14:07:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:45.729 14:07:16 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:46.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.663 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.663 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.663 14:07:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:47.611 14:07:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:47.611 14:07:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:47.611 14:07:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:47.611 14:07:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:47.611 14:07:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:47.611 14:07:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:47.611 14:07:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.611 14:07:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:47.611 14:07:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.869 14:07:18 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:47.869 14:07:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:47.869 14:07:18 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:48.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.128 Waiting for block devices as requested 00:07:48.128 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.128 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.386 14:07:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:48.386 14:07:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:48.386 14:07:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:48.386 14:07:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:48.386 14:07:18 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:48.386 14:07:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1543 -- # continue 00:07:48.386 14:07:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:48.386 14:07:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:48.386 14:07:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:48.386 14:07:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:48.386 14:07:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:48.386 14:07:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:48.386 14:07:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:48.386 14:07:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:48.386 14:07:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:48.386 14:07:18 -- common/autotest_common.sh@1543 -- # continue 00:07:48.386 14:07:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:48.386 14:07:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.386 14:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.386 14:07:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:48.386 14:07:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.386 14:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.386 14:07:18 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:48.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.210 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.210 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.210 14:07:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:49.210 14:07:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:49.210 14:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:49.210 14:07:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:49.210 14:07:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:49.210 14:07:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:49.210 14:07:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:49.210 14:07:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:49.210 14:07:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:49.210 14:07:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:49.210 14:07:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:49.210 
14:07:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:49.210 14:07:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:49.210 14:07:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:49.210 14:07:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:49.210 14:07:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:49.468 14:07:19 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:49.468 14:07:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:49.468 14:07:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:49.468 14:07:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:49.468 14:07:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:49.468 14:07:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:49.468 14:07:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:49.468 14:07:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:49.468 14:07:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:49.468 14:07:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:49.468 14:07:19 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:49.468 14:07:19 -- common/autotest_common.sh@1572 -- # return 0 00:07:49.468 14:07:19 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:49.468 14:07:19 -- common/autotest_common.sh@1580 -- # return 0 00:07:49.468 14:07:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:49.468 14:07:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:49.468 14:07:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:49.468 14:07:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:49.468 14:07:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:49.468 14:07:19 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.468 14:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 14:07:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:49.468 14:07:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:49.468 14:07:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.468 14:07:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.468 14:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 ************************************ 00:07:49.468 START TEST env 00:07:49.468 ************************************ 00:07:49.468 14:07:19 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:49.468 * Looking for test storage... 00:07:49.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:49.468 14:07:19 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.468 14:07:19 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.468 14:07:19 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.468 14:07:19 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.727 14:07:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.727 14:07:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.727 14:07:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.727 14:07:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.727 14:07:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.727 14:07:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.727 14:07:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.727 14:07:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.727 14:07:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.727 14:07:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.727 14:07:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.727 14:07:19 env -- 
scripts/common.sh@344 -- # case "$op" in 00:07:49.727 14:07:19 env -- scripts/common.sh@345 -- # : 1 00:07:49.727 14:07:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.727 14:07:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.727 14:07:19 env -- scripts/common.sh@365 -- # decimal 1 00:07:49.727 14:07:19 env -- scripts/common.sh@353 -- # local d=1 00:07:49.727 14:07:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.727 14:07:19 env -- scripts/common.sh@355 -- # echo 1 00:07:49.727 14:07:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.727 14:07:19 env -- scripts/common.sh@366 -- # decimal 2 00:07:49.727 14:07:19 env -- scripts/common.sh@353 -- # local d=2 00:07:49.727 14:07:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.727 14:07:19 env -- scripts/common.sh@355 -- # echo 2 00:07:49.727 14:07:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.727 14:07:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.727 14:07:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.727 14:07:19 env -- scripts/common.sh@368 -- # return 0 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.727 --rc genhtml_branch_coverage=1 00:07:49.727 --rc genhtml_function_coverage=1 00:07:49.727 --rc genhtml_legend=1 00:07:49.727 --rc geninfo_all_blocks=1 00:07:49.727 --rc geninfo_unexecuted_blocks=1 00:07:49.727 00:07:49.727 ' 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.727 --rc genhtml_branch_coverage=1 00:07:49.727 --rc genhtml_function_coverage=1 00:07:49.727 --rc genhtml_legend=1 00:07:49.727 --rc 
geninfo_all_blocks=1 00:07:49.727 --rc geninfo_unexecuted_blocks=1 00:07:49.727 00:07:49.727 ' 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.727 --rc genhtml_branch_coverage=1 00:07:49.727 --rc genhtml_function_coverage=1 00:07:49.727 --rc genhtml_legend=1 00:07:49.727 --rc geninfo_all_blocks=1 00:07:49.727 --rc geninfo_unexecuted_blocks=1 00:07:49.727 00:07:49.727 ' 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.727 --rc genhtml_branch_coverage=1 00:07:49.727 --rc genhtml_function_coverage=1 00:07:49.727 --rc genhtml_legend=1 00:07:49.727 --rc geninfo_all_blocks=1 00:07:49.727 --rc geninfo_unexecuted_blocks=1 00:07:49.727 00:07:49.727 ' 00:07:49.727 14:07:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.727 14:07:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.727 14:07:19 env -- common/autotest_common.sh@10 -- # set +x 00:07:49.727 ************************************ 00:07:49.727 START TEST env_memory 00:07:49.727 ************************************ 00:07:49.727 14:07:20 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:49.727 00:07:49.727 00:07:49.727 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.727 http://cunit.sourceforge.net/ 00:07:49.727 00:07:49.727 00:07:49.727 Suite: memory 00:07:49.727 Test: alloc and free memory map ...[2024-11-27 14:07:20.094205] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:49.727 passed 00:07:49.727 Test: mem map translation ...[2024-11-27 14:07:20.176509] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:49.727 [2024-11-27 14:07:20.176709] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:49.727 [2024-11-27 14:07:20.176853] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:49.727 [2024-11-27 14:07:20.176907] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:49.987 passed 00:07:49.987 Test: mem map registration ...[2024-11-27 14:07:20.259605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:49.987 [2024-11-27 14:07:20.259745] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:49.987 passed 00:07:49.987 Test: mem map adjacent registrations ...passed 00:07:49.987 00:07:49.987 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.987 suites 1 1 n/a 0 0 00:07:49.987 tests 4 4 4 0 0 00:07:49.987 asserts 152 152 152 0 n/a 00:07:49.987 00:07:49.987 Elapsed time = 0.333 seconds 00:07:49.987 00:07:49.987 real 0m0.378s 00:07:49.987 user 0m0.341s 00:07:49.987 sys 0m0.027s 00:07:49.987 14:07:20 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.987 ************************************ 00:07:49.987 END TEST env_memory 00:07:49.987 ************************************ 00:07:49.987 14:07:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:49.987 14:07:20 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:49.987 
14:07:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.987 14:07:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.987 14:07:20 env -- common/autotest_common.sh@10 -- # set +x 00:07:49.987 ************************************ 00:07:49.987 START TEST env_vtophys 00:07:49.987 ************************************ 00:07:49.987 14:07:20 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:49.987 EAL: lib.eal log level changed from notice to debug 00:07:49.987 EAL: Detected lcore 0 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 1 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 2 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 3 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 4 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 5 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 6 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 7 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 8 as core 0 on socket 0 00:07:49.987 EAL: Detected lcore 9 as core 0 on socket 0 00:07:49.987 EAL: Maximum logical cores by configuration: 128 00:07:49.987 EAL: Detected CPU lcores: 10 00:07:49.987 EAL: Detected NUMA nodes: 1 00:07:49.987 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:49.987 EAL: Detected shared linkage of DPDK 00:07:50.245 EAL: No shared files mode enabled, IPC will be disabled 00:07:50.245 EAL: Selected IOVA mode 'PA' 00:07:50.245 EAL: Probing VFIO support... 00:07:50.245 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:50.245 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:50.245 EAL: Ask a virtual area of 0x2e000 bytes 00:07:50.245 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:50.245 EAL: Setting up physically contiguous memory... 
00:07:50.245 EAL: Setting maximum number of open files to 524288 00:07:50.245 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:50.245 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:50.245 EAL: Ask a virtual area of 0x61000 bytes 00:07:50.245 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:50.245 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:50.245 EAL: Ask a virtual area of 0x400000000 bytes 00:07:50.245 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:50.245 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:50.245 EAL: Ask a virtual area of 0x61000 bytes 00:07:50.245 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:50.245 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:50.245 EAL: Ask a virtual area of 0x400000000 bytes 00:07:50.245 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:50.245 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:50.245 EAL: Ask a virtual area of 0x61000 bytes 00:07:50.245 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:50.245 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:50.245 EAL: Ask a virtual area of 0x400000000 bytes 00:07:50.245 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:50.245 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:50.245 EAL: Ask a virtual area of 0x61000 bytes 00:07:50.245 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:50.245 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:50.245 EAL: Ask a virtual area of 0x400000000 bytes 00:07:50.245 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:50.245 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:50.245 EAL: Hugepages will be freed exactly as allocated. 
00:07:50.245 EAL: No shared files mode enabled, IPC is disabled 00:07:50.245 EAL: No shared files mode enabled, IPC is disabled 00:07:50.245 EAL: TSC frequency is ~2200000 KHz 00:07:50.245 EAL: Main lcore 0 is ready (tid=7f19bfe33a40;cpuset=[0]) 00:07:50.245 EAL: Trying to obtain current memory policy. 00:07:50.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.245 EAL: Restoring previous memory policy: 0 00:07:50.245 EAL: request: mp_malloc_sync 00:07:50.245 EAL: No shared files mode enabled, IPC is disabled 00:07:50.245 EAL: Heap on socket 0 was expanded by 2MB 00:07:50.245 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:50.245 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:50.245 EAL: Mem event callback 'spdk:(nil)' registered 00:07:50.245 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:50.245 00:07:50.245 00:07:50.245 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.245 http://cunit.sourceforge.net/ 00:07:50.245 00:07:50.245 00:07:50.245 Suite: components_suite 00:07:50.812 Test: vtophys_malloc_test ...passed 00:07:50.812 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:50.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.812 EAL: Restoring previous memory policy: 4 00:07:50.812 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.812 EAL: request: mp_malloc_sync 00:07:50.812 EAL: No shared files mode enabled, IPC is disabled 00:07:50.812 EAL: Heap on socket 0 was expanded by 4MB 00:07:50.812 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.812 EAL: request: mp_malloc_sync 00:07:50.812 EAL: No shared files mode enabled, IPC is disabled 00:07:50.812 EAL: Heap on socket 0 was shrunk by 4MB 00:07:50.812 EAL: Trying to obtain current memory policy. 
00:07:50.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.812 EAL: Restoring previous memory policy: 4 00:07:50.812 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.812 EAL: request: mp_malloc_sync 00:07:50.812 EAL: No shared files mode enabled, IPC is disabled 00:07:50.812 EAL: Heap on socket 0 was expanded by 6MB 00:07:50.812 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.812 EAL: request: mp_malloc_sync 00:07:50.812 EAL: No shared files mode enabled, IPC is disabled 00:07:50.812 EAL: Heap on socket 0 was shrunk by 6MB 00:07:50.812 EAL: Trying to obtain current memory policy. 00:07:50.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.812 EAL: Restoring previous memory policy: 4 00:07:50.812 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.812 EAL: request: mp_malloc_sync 00:07:50.812 EAL: No shared files mode enabled, IPC is disabled 00:07:50.812 EAL: Heap on socket 0 was expanded by 10MB 00:07:50.812 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.813 EAL: request: mp_malloc_sync 00:07:50.813 EAL: No shared files mode enabled, IPC is disabled 00:07:50.813 EAL: Heap on socket 0 was shrunk by 10MB 00:07:50.813 EAL: Trying to obtain current memory policy. 00:07:50.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.813 EAL: Restoring previous memory policy: 4 00:07:50.813 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.813 EAL: request: mp_malloc_sync 00:07:50.813 EAL: No shared files mode enabled, IPC is disabled 00:07:50.813 EAL: Heap on socket 0 was expanded by 18MB 00:07:50.813 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.813 EAL: request: mp_malloc_sync 00:07:50.813 EAL: No shared files mode enabled, IPC is disabled 00:07:50.813 EAL: Heap on socket 0 was shrunk by 18MB 00:07:50.813 EAL: Trying to obtain current memory policy. 
00:07:50.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.813 EAL: Restoring previous memory policy: 4 00:07:50.813 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.813 EAL: request: mp_malloc_sync 00:07:50.813 EAL: No shared files mode enabled, IPC is disabled 00:07:50.813 EAL: Heap on socket 0 was expanded by 34MB 00:07:51.071 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.071 EAL: request: mp_malloc_sync 00:07:51.071 EAL: No shared files mode enabled, IPC is disabled 00:07:51.071 EAL: Heap on socket 0 was shrunk by 34MB 00:07:51.071 EAL: Trying to obtain current memory policy. 00:07:51.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:51.071 EAL: Restoring previous memory policy: 4 00:07:51.071 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.071 EAL: request: mp_malloc_sync 00:07:51.071 EAL: No shared files mode enabled, IPC is disabled 00:07:51.071 EAL: Heap on socket 0 was expanded by 66MB 00:07:51.071 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.071 EAL: request: mp_malloc_sync 00:07:51.071 EAL: No shared files mode enabled, IPC is disabled 00:07:51.071 EAL: Heap on socket 0 was shrunk by 66MB 00:07:51.330 EAL: Trying to obtain current memory policy. 00:07:51.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:51.330 EAL: Restoring previous memory policy: 4 00:07:51.330 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.330 EAL: request: mp_malloc_sync 00:07:51.330 EAL: No shared files mode enabled, IPC is disabled 00:07:51.330 EAL: Heap on socket 0 was expanded by 130MB 00:07:51.588 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.588 EAL: request: mp_malloc_sync 00:07:51.588 EAL: No shared files mode enabled, IPC is disabled 00:07:51.588 EAL: Heap on socket 0 was shrunk by 130MB 00:07:51.846 EAL: Trying to obtain current memory policy. 
00:07:51.846 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:51.846 EAL: Restoring previous memory policy: 4 00:07:51.846 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.846 EAL: request: mp_malloc_sync 00:07:51.846 EAL: No shared files mode enabled, IPC is disabled 00:07:51.846 EAL: Heap on socket 0 was expanded by 258MB 00:07:52.413 EAL: Calling mem event callback 'spdk:(nil)' 00:07:52.413 EAL: request: mp_malloc_sync 00:07:52.413 EAL: No shared files mode enabled, IPC is disabled 00:07:52.413 EAL: Heap on socket 0 was shrunk by 258MB 00:07:53.073 EAL: Trying to obtain current memory policy. 00:07:53.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:53.073 EAL: Restoring previous memory policy: 4 00:07:53.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.073 EAL: request: mp_malloc_sync 00:07:53.073 EAL: No shared files mode enabled, IPC is disabled 00:07:53.073 EAL: Heap on socket 0 was expanded by 514MB 00:07:54.009 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.267 EAL: request: mp_malloc_sync 00:07:54.267 EAL: No shared files mode enabled, IPC is disabled 00:07:54.267 EAL: Heap on socket 0 was shrunk by 514MB 00:07:54.833 EAL: Trying to obtain current memory policy. 
00:07:54.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.399 EAL: Restoring previous memory policy: 4 00:07:55.399 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.399 EAL: request: mp_malloc_sync 00:07:55.399 EAL: No shared files mode enabled, IPC is disabled 00:07:55.399 EAL: Heap on socket 0 was expanded by 1026MB 00:07:57.303 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.303 EAL: request: mp_malloc_sync 00:07:57.303 EAL: No shared files mode enabled, IPC is disabled 00:07:57.303 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:58.679 passed 00:07:58.679 00:07:58.679 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.679 suites 1 1 n/a 0 0 00:07:58.679 tests 2 2 2 0 0 00:07:58.679 asserts 5740 5740 5740 0 n/a 00:07:58.679 00:07:58.679 Elapsed time = 8.290 seconds 00:07:58.679 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.679 EAL: request: mp_malloc_sync 00:07:58.679 EAL: No shared files mode enabled, IPC is disabled 00:07:58.679 EAL: Heap on socket 0 was shrunk by 2MB 00:07:58.679 EAL: No shared files mode enabled, IPC is disabled 00:07:58.679 EAL: No shared files mode enabled, IPC is disabled 00:07:58.679 EAL: No shared files mode enabled, IPC is disabled 00:07:58.679 00:07:58.679 real 0m8.657s 00:07:58.679 user 0m7.093s 00:07:58.679 sys 0m1.376s 00:07:58.679 14:07:29 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.679 ************************************ 00:07:58.679 END TEST env_vtophys 00:07:58.679 ************************************ 00:07:58.679 14:07:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:58.679 14:07:29 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:58.679 14:07:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.679 14:07:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.679 14:07:29 env -- common/autotest_common.sh@10 -- # set +x 00:07:58.679 
************************************ 00:07:58.679 START TEST env_pci 00:07:58.679 ************************************ 00:07:58.679 14:07:29 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:58.679 00:07:58.679 00:07:58.679 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.679 http://cunit.sourceforge.net/ 00:07:58.679 00:07:58.679 00:07:58.679 Suite: pci 00:07:58.679 Test: pci_hook ...[2024-11-27 14:07:29.172967] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56838 has claimed it 00:07:58.937 passed 00:07:58.937 00:07:58.937 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.937 suites 1 1 n/a 0 0 00:07:58.937 tests 1 1 1 0 0 00:07:58.937 asserts 25 25 25 0 n/a 00:07:58.937 00:07:58.937 Elapsed time = 0.008 seconds 00:07:58.937 EAL: Cannot find device (10000:00:01.0) 00:07:58.937 EAL: Failed to attach device on primary process 00:07:58.937 00:07:58.937 real 0m0.093s 00:07:58.937 user 0m0.039s 00:07:58.937 sys 0m0.053s 00:07:58.937 14:07:29 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.937 14:07:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 ************************************ 00:07:58.937 END TEST env_pci 00:07:58.937 ************************************ 00:07:58.937 14:07:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:58.937 14:07:29 env -- env/env.sh@15 -- # uname 00:07:58.937 14:07:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:58.937 14:07:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:58.937 14:07:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:58.937 14:07:29 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.937 14:07:29 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.937 14:07:29 env -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 ************************************ 00:07:58.937 START TEST env_dpdk_post_init 00:07:58.937 ************************************ 00:07:58.937 14:07:29 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:58.937 EAL: Detected CPU lcores: 10 00:07:58.937 EAL: Detected NUMA nodes: 1 00:07:58.937 EAL: Detected shared linkage of DPDK 00:07:58.937 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:58.937 EAL: Selected IOVA mode 'PA' 00:07:59.196 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:59.196 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:59.196 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:59.196 Starting DPDK initialization... 00:07:59.196 Starting SPDK post initialization... 00:07:59.196 SPDK NVMe probe 00:07:59.196 Attaching to 0000:00:10.0 00:07:59.196 Attaching to 0000:00:11.0 00:07:59.196 Attached to 0000:00:10.0 00:07:59.196 Attached to 0000:00:11.0 00:07:59.196 Cleaning up... 
00:07:59.196 00:07:59.196 real 0m0.294s 00:07:59.196 user 0m0.096s 00:07:59.196 sys 0m0.098s 00:07:59.196 ************************************ 00:07:59.196 END TEST env_dpdk_post_init 00:07:59.196 ************************************ 00:07:59.196 14:07:29 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.196 14:07:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:59.196 14:07:29 env -- env/env.sh@26 -- # uname 00:07:59.196 14:07:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:59.196 14:07:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:59.196 14:07:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.196 14:07:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.196 14:07:29 env -- common/autotest_common.sh@10 -- # set +x 00:07:59.196 ************************************ 00:07:59.196 START TEST env_mem_callbacks 00:07:59.196 ************************************ 00:07:59.196 14:07:29 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:59.196 EAL: Detected CPU lcores: 10 00:07:59.196 EAL: Detected NUMA nodes: 1 00:07:59.196 EAL: Detected shared linkage of DPDK 00:07:59.196 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:59.196 EAL: Selected IOVA mode 'PA' 00:07:59.456 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:59.456 00:07:59.456 00:07:59.456 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.456 http://cunit.sourceforge.net/ 00:07:59.456 00:07:59.456 00:07:59.456 Suite: memory 00:07:59.456 Test: test ... 
00:07:59.456 register 0x200000200000 2097152 00:07:59.456 malloc 3145728 00:07:59.456 register 0x200000400000 4194304 00:07:59.456 buf 0x2000004fffc0 len 3145728 PASSED 00:07:59.456 malloc 64 00:07:59.456 buf 0x2000004ffec0 len 64 PASSED 00:07:59.456 malloc 4194304 00:07:59.456 register 0x200000800000 6291456 00:07:59.456 buf 0x2000009fffc0 len 4194304 PASSED 00:07:59.456 free 0x2000004fffc0 3145728 00:07:59.456 free 0x2000004ffec0 64 00:07:59.456 unregister 0x200000400000 4194304 PASSED 00:07:59.456 free 0x2000009fffc0 4194304 00:07:59.456 unregister 0x200000800000 6291456 PASSED 00:07:59.456 malloc 8388608 00:07:59.456 register 0x200000400000 10485760 00:07:59.456 buf 0x2000005fffc0 len 8388608 PASSED 00:07:59.456 free 0x2000005fffc0 8388608 00:07:59.456 unregister 0x200000400000 10485760 PASSED 00:07:59.456 passed 00:07:59.456 00:07:59.456 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.456 suites 1 1 n/a 0 0 00:07:59.456 tests 1 1 1 0 0 00:07:59.456 asserts 15 15 15 0 n/a 00:07:59.456 00:07:59.456 Elapsed time = 0.067 seconds 00:07:59.456 00:07:59.456 real 0m0.275s 00:07:59.456 user 0m0.096s 00:07:59.456 sys 0m0.076s 00:07:59.456 ************************************ 00:07:59.456 END TEST env_mem_callbacks 00:07:59.456 ************************************ 00:07:59.456 14:07:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.456 14:07:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:59.456 00:07:59.456 real 0m10.153s 00:07:59.456 user 0m7.881s 00:07:59.456 sys 0m1.867s 00:07:59.456 14:07:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.456 ************************************ 00:07:59.456 END TEST env 00:07:59.456 ************************************ 00:07:59.456 14:07:29 env -- common/autotest_common.sh@10 -- # set +x 00:07:59.715 14:07:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:59.715 14:07:29 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.715 14:07:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.715 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:07:59.715 ************************************ 00:07:59.715 START TEST rpc 00:07:59.715 ************************************ 00:07:59.715 14:07:29 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:59.715 * Looking for test storage... 00:07:59.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:59.715 14:07:30 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.715 14:07:30 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.715 14:07:30 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.715 14:07:30 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.715 14:07:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.715 14:07:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.715 14:07:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.715 14:07:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.715 14:07:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.715 14:07:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:59.715 14:07:30 rpc -- scripts/common.sh@345 -- # : 1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.715 14:07:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.715 14:07:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@353 -- # local d=1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.715 14:07:30 rpc -- scripts/common.sh@355 -- # echo 1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.715 14:07:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@353 -- # local d=2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.715 14:07:30 rpc -- scripts/common.sh@355 -- # echo 2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.715 14:07:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.715 14:07:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.716 14:07:30 rpc -- scripts/common.sh@368 -- # return 0 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.716 --rc genhtml_branch_coverage=1 00:07:59.716 --rc genhtml_function_coverage=1 00:07:59.716 --rc genhtml_legend=1 00:07:59.716 --rc geninfo_all_blocks=1 00:07:59.716 --rc geninfo_unexecuted_blocks=1 00:07:59.716 00:07:59.716 ' 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.716 --rc genhtml_branch_coverage=1 00:07:59.716 --rc genhtml_function_coverage=1 00:07:59.716 --rc genhtml_legend=1 00:07:59.716 --rc geninfo_all_blocks=1 00:07:59.716 --rc geninfo_unexecuted_blocks=1 00:07:59.716 00:07:59.716 ' 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:59.716 --rc genhtml_branch_coverage=1 00:07:59.716 --rc genhtml_function_coverage=1 00:07:59.716 --rc genhtml_legend=1 00:07:59.716 --rc geninfo_all_blocks=1 00:07:59.716 --rc geninfo_unexecuted_blocks=1 00:07:59.716 00:07:59.716 ' 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.716 --rc genhtml_branch_coverage=1 00:07:59.716 --rc genhtml_function_coverage=1 00:07:59.716 --rc genhtml_legend=1 00:07:59.716 --rc geninfo_all_blocks=1 00:07:59.716 --rc geninfo_unexecuted_blocks=1 00:07:59.716 00:07:59.716 ' 00:07:59.716 14:07:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56965 00:07:59.716 14:07:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.716 14:07:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:59.716 14:07:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56965 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@835 -- # '[' -z 56965 ']' 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.716 14:07:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.974 [2024-11-27 14:07:30.299077] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:07:59.974 [2024-11-27 14:07:30.299265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56965 ] 00:08:00.232 [2024-11-27 14:07:30.490733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.232 [2024-11-27 14:07:30.646504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:00.232 [2024-11-27 14:07:30.646612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56965' to capture a snapshot of events at runtime. 00:08:00.232 [2024-11-27 14:07:30.646633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.232 [2024-11-27 14:07:30.646661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.232 [2024-11-27 14:07:30.646675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56965 for offline analysis/debug. 
00:08:00.232 [2024-11-27 14:07:30.648306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.167 14:07:31 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.167 14:07:31 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:01.167 14:07:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:01.167 14:07:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:01.167 14:07:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:01.167 14:07:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:01.167 14:07:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.167 14:07:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.167 14:07:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.167 ************************************ 00:08:01.167 START TEST rpc_integrity 00:08:01.167 ************************************ 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:01.167 14:07:31 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.167 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.167 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:01.167 { 00:08:01.167 "name": "Malloc0", 00:08:01.167 "aliases": [ 00:08:01.167 "308111a3-ef51-4abf-9d92-4e57cd637b42" 00:08:01.167 ], 00:08:01.167 "product_name": "Malloc disk", 00:08:01.167 "block_size": 512, 00:08:01.167 "num_blocks": 16384, 00:08:01.167 "uuid": "308111a3-ef51-4abf-9d92-4e57cd637b42", 00:08:01.167 "assigned_rate_limits": { 00:08:01.167 "rw_ios_per_sec": 0, 00:08:01.167 "rw_mbytes_per_sec": 0, 00:08:01.167 "r_mbytes_per_sec": 0, 00:08:01.167 "w_mbytes_per_sec": 0 00:08:01.167 }, 00:08:01.167 "claimed": false, 00:08:01.167 "zoned": false, 00:08:01.167 "supported_io_types": { 00:08:01.167 "read": true, 00:08:01.167 "write": true, 00:08:01.167 "unmap": true, 00:08:01.167 "flush": true, 00:08:01.167 "reset": true, 00:08:01.167 "nvme_admin": false, 00:08:01.167 "nvme_io": false, 00:08:01.167 "nvme_io_md": false, 00:08:01.167 "write_zeroes": true, 00:08:01.167 "zcopy": true, 00:08:01.167 "get_zone_info": false, 00:08:01.167 "zone_management": false, 00:08:01.167 "zone_append": false, 00:08:01.167 "compare": false, 00:08:01.167 "compare_and_write": false, 00:08:01.167 "abort": true, 00:08:01.167 "seek_hole": false, 
00:08:01.167 "seek_data": false, 00:08:01.167 "copy": true, 00:08:01.168 "nvme_iov_md": false 00:08:01.168 }, 00:08:01.168 "memory_domains": [ 00:08:01.168 { 00:08:01.168 "dma_device_id": "system", 00:08:01.168 "dma_device_type": 1 00:08:01.168 }, 00:08:01.168 { 00:08:01.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.168 "dma_device_type": 2 00:08:01.168 } 00:08:01.168 ], 00:08:01.168 "driver_specific": {} 00:08:01.168 } 00:08:01.168 ]' 00:08:01.168 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:01.426 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:01.426 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:01.426 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.426 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.426 [2024-11-27 14:07:31.718724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:01.426 [2024-11-27 14:07:31.718990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.426 [2024-11-27 14:07:31.719045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:01.426 [2024-11-27 14:07:31.719079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.426 [2024-11-27 14:07:31.722193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.426 [2024-11-27 14:07:31.722370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:01.426 Passthru0 00:08:01.426 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.426 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:01.426 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.426 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:01.426 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.426 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:01.426 { 00:08:01.426 "name": "Malloc0", 00:08:01.426 "aliases": [ 00:08:01.426 "308111a3-ef51-4abf-9d92-4e57cd637b42" 00:08:01.426 ], 00:08:01.426 "product_name": "Malloc disk", 00:08:01.426 "block_size": 512, 00:08:01.426 "num_blocks": 16384, 00:08:01.426 "uuid": "308111a3-ef51-4abf-9d92-4e57cd637b42", 00:08:01.426 "assigned_rate_limits": { 00:08:01.426 "rw_ios_per_sec": 0, 00:08:01.426 "rw_mbytes_per_sec": 0, 00:08:01.426 "r_mbytes_per_sec": 0, 00:08:01.426 "w_mbytes_per_sec": 0 00:08:01.426 }, 00:08:01.426 "claimed": true, 00:08:01.426 "claim_type": "exclusive_write", 00:08:01.426 "zoned": false, 00:08:01.426 "supported_io_types": { 00:08:01.426 "read": true, 00:08:01.426 "write": true, 00:08:01.426 "unmap": true, 00:08:01.426 "flush": true, 00:08:01.426 "reset": true, 00:08:01.426 "nvme_admin": false, 00:08:01.426 "nvme_io": false, 00:08:01.426 "nvme_io_md": false, 00:08:01.426 "write_zeroes": true, 00:08:01.426 "zcopy": true, 00:08:01.426 "get_zone_info": false, 00:08:01.426 "zone_management": false, 00:08:01.426 "zone_append": false, 00:08:01.426 "compare": false, 00:08:01.426 "compare_and_write": false, 00:08:01.426 "abort": true, 00:08:01.426 "seek_hole": false, 00:08:01.426 "seek_data": false, 00:08:01.426 "copy": true, 00:08:01.426 "nvme_iov_md": false 00:08:01.426 }, 00:08:01.426 "memory_domains": [ 00:08:01.426 { 00:08:01.426 "dma_device_id": "system", 00:08:01.426 "dma_device_type": 1 00:08:01.426 }, 00:08:01.426 { 00:08:01.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.426 "dma_device_type": 2 00:08:01.426 } 00:08:01.426 ], 00:08:01.426 "driver_specific": {} 00:08:01.426 }, 00:08:01.426 { 00:08:01.426 "name": "Passthru0", 00:08:01.426 "aliases": [ 00:08:01.426 "dd653be9-aa18-5893-98e5-24d915fd1902" 00:08:01.426 ], 00:08:01.426 "product_name": "passthru", 00:08:01.426 
"block_size": 512, 00:08:01.426 "num_blocks": 16384, 00:08:01.426 "uuid": "dd653be9-aa18-5893-98e5-24d915fd1902", 00:08:01.426 "assigned_rate_limits": { 00:08:01.426 "rw_ios_per_sec": 0, 00:08:01.426 "rw_mbytes_per_sec": 0, 00:08:01.426 "r_mbytes_per_sec": 0, 00:08:01.426 "w_mbytes_per_sec": 0 00:08:01.426 }, 00:08:01.426 "claimed": false, 00:08:01.426 "zoned": false, 00:08:01.426 "supported_io_types": { 00:08:01.426 "read": true, 00:08:01.426 "write": true, 00:08:01.426 "unmap": true, 00:08:01.426 "flush": true, 00:08:01.426 "reset": true, 00:08:01.426 "nvme_admin": false, 00:08:01.426 "nvme_io": false, 00:08:01.426 "nvme_io_md": false, 00:08:01.426 "write_zeroes": true, 00:08:01.426 "zcopy": true, 00:08:01.426 "get_zone_info": false, 00:08:01.426 "zone_management": false, 00:08:01.426 "zone_append": false, 00:08:01.426 "compare": false, 00:08:01.426 "compare_and_write": false, 00:08:01.426 "abort": true, 00:08:01.426 "seek_hole": false, 00:08:01.426 "seek_data": false, 00:08:01.426 "copy": true, 00:08:01.426 "nvme_iov_md": false 00:08:01.426 }, 00:08:01.426 "memory_domains": [ 00:08:01.426 { 00:08:01.427 "dma_device_id": "system", 00:08:01.427 "dma_device_type": 1 00:08:01.427 }, 00:08:01.427 { 00:08:01.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.427 "dma_device_type": 2 00:08:01.427 } 00:08:01.427 ], 00:08:01.427 "driver_specific": { 00:08:01.427 "passthru": { 00:08:01.427 "name": "Passthru0", 00:08:01.427 "base_bdev_name": "Malloc0" 00:08:01.427 } 00:08:01.427 } 00:08:01.427 } 00:08:01.427 ]' 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.427 14:07:31 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:01.427 ************************************ 00:08:01.427 END TEST rpc_integrity 00:08:01.427 ************************************ 00:08:01.427 14:07:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:01.427 00:08:01.427 real 0m0.359s 00:08:01.427 user 0m0.218s 00:08:01.427 sys 0m0.042s 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.427 14:07:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:01.685 14:07:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.685 14:07:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.685 14:07:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 ************************************ 00:08:01.685 START TEST rpc_plugins 00:08:01.685 ************************************ 00:08:01.685 14:07:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:01.685 14:07:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:08:01.685 14:07:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 14:07:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 14:07:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:01.685 14:07:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:01.685 14:07:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 14:07:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:01.685 { 00:08:01.685 "name": "Malloc1", 00:08:01.685 "aliases": [ 00:08:01.685 "df8b6b57-9db3-4154-999b-a2040f609ac7" 00:08:01.685 ], 00:08:01.685 "product_name": "Malloc disk", 00:08:01.685 "block_size": 4096, 00:08:01.685 "num_blocks": 256, 00:08:01.685 "uuid": "df8b6b57-9db3-4154-999b-a2040f609ac7", 00:08:01.685 "assigned_rate_limits": { 00:08:01.685 "rw_ios_per_sec": 0, 00:08:01.685 "rw_mbytes_per_sec": 0, 00:08:01.685 "r_mbytes_per_sec": 0, 00:08:01.685 "w_mbytes_per_sec": 0 00:08:01.685 }, 00:08:01.685 "claimed": false, 00:08:01.685 "zoned": false, 00:08:01.685 "supported_io_types": { 00:08:01.685 "read": true, 00:08:01.685 "write": true, 00:08:01.685 "unmap": true, 00:08:01.685 "flush": true, 00:08:01.685 "reset": true, 00:08:01.685 "nvme_admin": false, 00:08:01.685 "nvme_io": false, 00:08:01.685 "nvme_io_md": false, 00:08:01.685 "write_zeroes": true, 00:08:01.685 "zcopy": true, 00:08:01.685 "get_zone_info": false, 00:08:01.685 "zone_management": false, 00:08:01.685 "zone_append": false, 00:08:01.685 "compare": false, 00:08:01.685 "compare_and_write": false, 00:08:01.685 "abort": true, 00:08:01.685 "seek_hole": false, 00:08:01.685 "seek_data": false, 00:08:01.685 "copy": 
true, 00:08:01.685 "nvme_iov_md": false 00:08:01.685 }, 00:08:01.685 "memory_domains": [ 00:08:01.685 { 00:08:01.685 "dma_device_id": "system", 00:08:01.685 "dma_device_type": 1 00:08:01.685 }, 00:08:01.685 { 00:08:01.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.685 "dma_device_type": 2 00:08:01.685 } 00:08:01.685 ], 00:08:01.685 "driver_specific": {} 00:08:01.685 } 00:08:01.685 ]' 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:01.685 ************************************ 00:08:01.685 END TEST rpc_plugins 00:08:01.685 ************************************ 00:08:01.685 14:07:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:01.685 00:08:01.685 real 0m0.155s 00:08:01.685 user 0m0.090s 00:08:01.685 sys 0m0.022s 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.685 14:07:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:01.685 14:07:32 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.685 14:07:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.685 14:07:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 ************************************ 00:08:01.685 START TEST rpc_trace_cmd_test 00:08:01.685 ************************************ 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:01.685 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56965", 00:08:01.685 "tpoint_group_mask": "0x8", 00:08:01.685 "iscsi_conn": { 00:08:01.685 "mask": "0x2", 00:08:01.685 "tpoint_mask": "0x0" 00:08:01.685 }, 00:08:01.685 "scsi": { 00:08:01.685 "mask": "0x4", 00:08:01.685 "tpoint_mask": "0x0" 00:08:01.685 }, 00:08:01.685 "bdev": { 00:08:01.685 "mask": "0x8", 00:08:01.685 "tpoint_mask": "0xffffffffffffffff" 00:08:01.685 }, 00:08:01.685 "nvmf_rdma": { 00:08:01.685 "mask": "0x10", 00:08:01.685 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "nvmf_tcp": { 00:08:01.686 "mask": "0x20", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "ftl": { 00:08:01.686 "mask": "0x40", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "blobfs": { 00:08:01.686 "mask": "0x80", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "dsa": { 00:08:01.686 "mask": "0x200", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "thread": { 00:08:01.686 "mask": "0x400", 00:08:01.686 
"tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "nvme_pcie": { 00:08:01.686 "mask": "0x800", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "iaa": { 00:08:01.686 "mask": "0x1000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "nvme_tcp": { 00:08:01.686 "mask": "0x2000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "bdev_nvme": { 00:08:01.686 "mask": "0x4000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "sock": { 00:08:01.686 "mask": "0x8000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "blob": { 00:08:01.686 "mask": "0x10000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "bdev_raid": { 00:08:01.686 "mask": "0x20000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 }, 00:08:01.686 "scheduler": { 00:08:01.686 "mask": "0x40000", 00:08:01.686 "tpoint_mask": "0x0" 00:08:01.686 } 00:08:01.686 }' 00:08:01.686 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:01.944 ************************************ 00:08:01.944 END TEST rpc_trace_cmd_test 00:08:01.944 ************************************ 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:01.944 00:08:01.944 real 0m0.288s 00:08:01.944 user 
0m0.251s 00:08:01.944 sys 0m0.029s 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.944 14:07:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 14:07:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:02.203 14:07:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:02.203 14:07:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:02.203 14:07:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.203 14:07:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.203 14:07:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 ************************************ 00:08:02.203 START TEST rpc_daemon_integrity 00:08:02.203 ************************************ 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:02.203 { 00:08:02.203 "name": "Malloc2", 00:08:02.203 "aliases": [ 00:08:02.203 "f7cd63b6-cab6-4ef0-87d6-0436b4bcc71c" 00:08:02.203 ], 00:08:02.203 "product_name": "Malloc disk", 00:08:02.203 "block_size": 512, 00:08:02.203 "num_blocks": 16384, 00:08:02.203 "uuid": "f7cd63b6-cab6-4ef0-87d6-0436b4bcc71c", 00:08:02.203 "assigned_rate_limits": { 00:08:02.203 "rw_ios_per_sec": 0, 00:08:02.203 "rw_mbytes_per_sec": 0, 00:08:02.203 "r_mbytes_per_sec": 0, 00:08:02.203 "w_mbytes_per_sec": 0 00:08:02.203 }, 00:08:02.203 "claimed": false, 00:08:02.203 "zoned": false, 00:08:02.203 "supported_io_types": { 00:08:02.203 "read": true, 00:08:02.203 "write": true, 00:08:02.203 "unmap": true, 00:08:02.203 "flush": true, 00:08:02.203 "reset": true, 00:08:02.203 "nvme_admin": false, 00:08:02.203 "nvme_io": false, 00:08:02.203 "nvme_io_md": false, 00:08:02.203 "write_zeroes": true, 00:08:02.203 "zcopy": true, 00:08:02.203 "get_zone_info": false, 00:08:02.203 "zone_management": false, 00:08:02.203 "zone_append": false, 00:08:02.203 "compare": false, 00:08:02.203 "compare_and_write": false, 00:08:02.203 "abort": true, 00:08:02.203 "seek_hole": false, 00:08:02.203 "seek_data": false, 00:08:02.203 "copy": true, 00:08:02.203 "nvme_iov_md": false 00:08:02.203 }, 00:08:02.203 "memory_domains": [ 00:08:02.203 { 00:08:02.203 "dma_device_id": "system", 00:08:02.203 "dma_device_type": 1 00:08:02.203 }, 00:08:02.203 { 00:08:02.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.203 "dma_device_type": 2 00:08:02.203 } 
00:08:02.203 ], 00:08:02.203 "driver_specific": {} 00:08:02.203 } 00:08:02.203 ]' 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 [2024-11-27 14:07:32.654788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:02.203 [2024-11-27 14:07:32.654888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.203 [2024-11-27 14:07:32.654923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:02.203 [2024-11-27 14:07:32.654941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.203 [2024-11-27 14:07:32.657997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.203 [2024-11-27 14:07:32.658047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:02.203 Passthru0 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.203 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:02.203 { 00:08:02.203 "name": "Malloc2", 00:08:02.203 "aliases": [ 00:08:02.203 "f7cd63b6-cab6-4ef0-87d6-0436b4bcc71c" 
00:08:02.203 ], 00:08:02.203 "product_name": "Malloc disk", 00:08:02.203 "block_size": 512, 00:08:02.203 "num_blocks": 16384, 00:08:02.203 "uuid": "f7cd63b6-cab6-4ef0-87d6-0436b4bcc71c", 00:08:02.203 "assigned_rate_limits": { 00:08:02.203 "rw_ios_per_sec": 0, 00:08:02.203 "rw_mbytes_per_sec": 0, 00:08:02.203 "r_mbytes_per_sec": 0, 00:08:02.203 "w_mbytes_per_sec": 0 00:08:02.203 }, 00:08:02.203 "claimed": true, 00:08:02.203 "claim_type": "exclusive_write", 00:08:02.203 "zoned": false, 00:08:02.203 "supported_io_types": { 00:08:02.203 "read": true, 00:08:02.203 "write": true, 00:08:02.203 "unmap": true, 00:08:02.203 "flush": true, 00:08:02.203 "reset": true, 00:08:02.203 "nvme_admin": false, 00:08:02.203 "nvme_io": false, 00:08:02.203 "nvme_io_md": false, 00:08:02.203 "write_zeroes": true, 00:08:02.203 "zcopy": true, 00:08:02.203 "get_zone_info": false, 00:08:02.203 "zone_management": false, 00:08:02.203 "zone_append": false, 00:08:02.203 "compare": false, 00:08:02.203 "compare_and_write": false, 00:08:02.203 "abort": true, 00:08:02.203 "seek_hole": false, 00:08:02.203 "seek_data": false, 00:08:02.203 "copy": true, 00:08:02.203 "nvme_iov_md": false 00:08:02.203 }, 00:08:02.203 "memory_domains": [ 00:08:02.203 { 00:08:02.203 "dma_device_id": "system", 00:08:02.203 "dma_device_type": 1 00:08:02.203 }, 00:08:02.203 { 00:08:02.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.203 "dma_device_type": 2 00:08:02.203 } 00:08:02.203 ], 00:08:02.203 "driver_specific": {} 00:08:02.203 }, 00:08:02.203 { 00:08:02.203 "name": "Passthru0", 00:08:02.203 "aliases": [ 00:08:02.203 "0fb52feb-6d7c-5470-9d8b-fd5d9d3681db" 00:08:02.203 ], 00:08:02.203 "product_name": "passthru", 00:08:02.203 "block_size": 512, 00:08:02.203 "num_blocks": 16384, 00:08:02.203 "uuid": "0fb52feb-6d7c-5470-9d8b-fd5d9d3681db", 00:08:02.203 "assigned_rate_limits": { 00:08:02.203 "rw_ios_per_sec": 0, 00:08:02.203 "rw_mbytes_per_sec": 0, 00:08:02.204 "r_mbytes_per_sec": 0, 00:08:02.204 "w_mbytes_per_sec": 0 
00:08:02.204 }, 00:08:02.204 "claimed": false, 00:08:02.204 "zoned": false, 00:08:02.204 "supported_io_types": { 00:08:02.204 "read": true, 00:08:02.204 "write": true, 00:08:02.204 "unmap": true, 00:08:02.204 "flush": true, 00:08:02.204 "reset": true, 00:08:02.204 "nvme_admin": false, 00:08:02.204 "nvme_io": false, 00:08:02.204 "nvme_io_md": false, 00:08:02.204 "write_zeroes": true, 00:08:02.204 "zcopy": true, 00:08:02.204 "get_zone_info": false, 00:08:02.204 "zone_management": false, 00:08:02.204 "zone_append": false, 00:08:02.204 "compare": false, 00:08:02.204 "compare_and_write": false, 00:08:02.204 "abort": true, 00:08:02.204 "seek_hole": false, 00:08:02.204 "seek_data": false, 00:08:02.204 "copy": true, 00:08:02.204 "nvme_iov_md": false 00:08:02.204 }, 00:08:02.204 "memory_domains": [ 00:08:02.204 { 00:08:02.204 "dma_device_id": "system", 00:08:02.204 "dma_device_type": 1 00:08:02.204 }, 00:08:02.204 { 00:08:02.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.204 "dma_device_type": 2 00:08:02.204 } 00:08:02.204 ], 00:08:02.204 "driver_specific": { 00:08:02.204 "passthru": { 00:08:02.204 "name": "Passthru0", 00:08:02.204 "base_bdev_name": "Malloc2" 00:08:02.204 } 00:08:02.204 } 00:08:02.204 } 00:08:02.204 ]' 00:08:02.204 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:02.463 00:08:02.463 real 0m0.345s 00:08:02.463 user 0m0.212s 00:08:02.463 sys 0m0.039s 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.463 14:07:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 ************************************ 00:08:02.463 END TEST rpc_daemon_integrity 00:08:02.463 ************************************ 00:08:02.463 14:07:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:02.463 14:07:32 rpc -- rpc/rpc.sh@84 -- # killprocess 56965 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 56965 ']' 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@958 -- # kill -0 56965 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@959 -- # uname 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56965 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.463 killing process with pid 56965 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56965' 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@973 -- # kill 56965 00:08:02.463 14:07:32 rpc -- common/autotest_common.sh@978 -- # wait 56965 00:08:04.987 00:08:04.987 real 0m5.142s 00:08:04.987 user 0m5.891s 00:08:04.987 sys 0m0.886s 00:08:04.987 14:07:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.987 14:07:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 ************************************ 00:08:04.987 END TEST rpc 00:08:04.987 ************************************ 00:08:04.987 14:07:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:04.987 14:07:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.987 14:07:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.987 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 ************************************ 00:08:04.987 START TEST skip_rpc 00:08:04.987 ************************************ 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:04.987 * Looking for test storage... 
00:08:04.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.987 14:07:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:04.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.987 --rc genhtml_branch_coverage=1 00:08:04.987 --rc genhtml_function_coverage=1 00:08:04.987 --rc genhtml_legend=1 00:08:04.987 --rc geninfo_all_blocks=1 00:08:04.987 --rc geninfo_unexecuted_blocks=1 00:08:04.987 00:08:04.987 ' 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:04.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.987 --rc genhtml_branch_coverage=1 00:08:04.987 --rc genhtml_function_coverage=1 00:08:04.987 --rc genhtml_legend=1 00:08:04.987 --rc geninfo_all_blocks=1 00:08:04.987 --rc geninfo_unexecuted_blocks=1 00:08:04.987 00:08:04.987 ' 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:04.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.987 --rc genhtml_branch_coverage=1 00:08:04.987 --rc genhtml_function_coverage=1 00:08:04.987 --rc genhtml_legend=1 00:08:04.987 --rc geninfo_all_blocks=1 00:08:04.987 --rc geninfo_unexecuted_blocks=1 00:08:04.987 00:08:04.987 ' 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:04.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.987 --rc genhtml_branch_coverage=1 00:08:04.987 --rc genhtml_function_coverage=1 00:08:04.987 --rc genhtml_legend=1 00:08:04.987 --rc geninfo_all_blocks=1 00:08:04.987 --rc geninfo_unexecuted_blocks=1 00:08:04.987 00:08:04.987 ' 00:08:04.987 14:07:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:04.987 14:07:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:04.987 14:07:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.987 14:07:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 ************************************ 00:08:04.987 START TEST skip_rpc 00:08:04.987 ************************************ 00:08:04.987 14:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:04.987 14:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57194 00:08:04.987 14:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:04.987 14:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:04.987 14:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:04.987 [2024-11-27 14:07:35.488314] Starting SPDK v25.01-pre 
git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:04.987 [2024-11-27 14:07:35.488494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57194 ] 00:08:05.245 [2024-11-27 14:07:35.670737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.507 [2024-11-27 14:07:35.833236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57194 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57194 ']' 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57194 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57194 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.787 killing process with pid 57194 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57194' 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57194 00:08:10.787 14:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57194 00:08:12.174 00:08:12.174 real 0m7.294s 00:08:12.174 user 0m6.721s 00:08:12.174 sys 0m0.470s 00:08:12.174 14:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.174 ************************************ 00:08:12.174 END TEST skip_rpc 00:08:12.174 ************************************ 00:08:12.174 14:07:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.432 14:07:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:12.432 14:07:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.432 14:07:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.432 14:07:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.432 
************************************ 00:08:12.432 START TEST skip_rpc_with_json 00:08:12.433 ************************************ 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57298 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57298 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57298 ']' 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.433 14:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:12.433 [2024-11-27 14:07:42.844383] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:08:12.433 [2024-11-27 14:07:42.845204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57298 ] 00:08:12.691 [2024-11-27 14:07:43.040601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.691 [2024-11-27 14:07:43.173911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:13.628 [2024-11-27 14:07:44.055879] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:13.628 request: 00:08:13.628 { 00:08:13.628 "trtype": "tcp", 00:08:13.628 "method": "nvmf_get_transports", 00:08:13.628 "req_id": 1 00:08:13.628 } 00:08:13.628 Got JSON-RPC error response 00:08:13.628 response: 00:08:13.628 { 00:08:13.628 "code": -19, 00:08:13.628 "message": "No such device" 00:08:13.628 } 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:13.628 [2024-11-27 14:07:44.064022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.628 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:13.887 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.887 14:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:13.887 { 00:08:13.887 "subsystems": [ 00:08:13.887 { 00:08:13.887 "subsystem": "fsdev", 00:08:13.887 "config": [ 00:08:13.887 { 00:08:13.887 "method": "fsdev_set_opts", 00:08:13.887 "params": { 00:08:13.887 "fsdev_io_pool_size": 65535, 00:08:13.887 "fsdev_io_cache_size": 256 00:08:13.887 } 00:08:13.887 } 00:08:13.887 ] 00:08:13.887 }, 00:08:13.887 { 00:08:13.887 "subsystem": "keyring", 00:08:13.887 "config": [] 00:08:13.887 }, 00:08:13.887 { 00:08:13.887 "subsystem": "iobuf", 00:08:13.887 "config": [ 00:08:13.887 { 00:08:13.887 "method": "iobuf_set_options", 00:08:13.887 "params": { 00:08:13.887 "small_pool_count": 8192, 00:08:13.887 "large_pool_count": 1024, 00:08:13.887 "small_bufsize": 8192, 00:08:13.887 "large_bufsize": 135168, 00:08:13.887 "enable_numa": false 00:08:13.887 } 00:08:13.887 } 00:08:13.887 ] 00:08:13.887 }, 00:08:13.887 { 00:08:13.887 "subsystem": "sock", 00:08:13.887 "config": [ 00:08:13.887 { 00:08:13.887 "method": "sock_set_default_impl", 00:08:13.887 "params": { 00:08:13.887 "impl_name": "posix" 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "sock_impl_set_options", 00:08:13.888 "params": { 00:08:13.888 "impl_name": "ssl", 00:08:13.888 "recv_buf_size": 4096, 00:08:13.888 "send_buf_size": 4096, 00:08:13.888 "enable_recv_pipe": true, 00:08:13.888 "enable_quickack": false, 00:08:13.888 
"enable_placement_id": 0, 00:08:13.888 "enable_zerocopy_send_server": true, 00:08:13.888 "enable_zerocopy_send_client": false, 00:08:13.888 "zerocopy_threshold": 0, 00:08:13.888 "tls_version": 0, 00:08:13.888 "enable_ktls": false 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "sock_impl_set_options", 00:08:13.888 "params": { 00:08:13.888 "impl_name": "posix", 00:08:13.888 "recv_buf_size": 2097152, 00:08:13.888 "send_buf_size": 2097152, 00:08:13.888 "enable_recv_pipe": true, 00:08:13.888 "enable_quickack": false, 00:08:13.888 "enable_placement_id": 0, 00:08:13.888 "enable_zerocopy_send_server": true, 00:08:13.888 "enable_zerocopy_send_client": false, 00:08:13.888 "zerocopy_threshold": 0, 00:08:13.888 "tls_version": 0, 00:08:13.888 "enable_ktls": false 00:08:13.888 } 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "vmd", 00:08:13.888 "config": [] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "accel", 00:08:13.888 "config": [ 00:08:13.888 { 00:08:13.888 "method": "accel_set_options", 00:08:13.888 "params": { 00:08:13.888 "small_cache_size": 128, 00:08:13.888 "large_cache_size": 16, 00:08:13.888 "task_count": 2048, 00:08:13.888 "sequence_count": 2048, 00:08:13.888 "buf_count": 2048 00:08:13.888 } 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "bdev", 00:08:13.888 "config": [ 00:08:13.888 { 00:08:13.888 "method": "bdev_set_options", 00:08:13.888 "params": { 00:08:13.888 "bdev_io_pool_size": 65535, 00:08:13.888 "bdev_io_cache_size": 256, 00:08:13.888 "bdev_auto_examine": true, 00:08:13.888 "iobuf_small_cache_size": 128, 00:08:13.888 "iobuf_large_cache_size": 16 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "bdev_raid_set_options", 00:08:13.888 "params": { 00:08:13.888 "process_window_size_kb": 1024, 00:08:13.888 "process_max_bandwidth_mb_sec": 0 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "bdev_iscsi_set_options", 
00:08:13.888 "params": { 00:08:13.888 "timeout_sec": 30 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "bdev_nvme_set_options", 00:08:13.888 "params": { 00:08:13.888 "action_on_timeout": "none", 00:08:13.888 "timeout_us": 0, 00:08:13.888 "timeout_admin_us": 0, 00:08:13.888 "keep_alive_timeout_ms": 10000, 00:08:13.888 "arbitration_burst": 0, 00:08:13.888 "low_priority_weight": 0, 00:08:13.888 "medium_priority_weight": 0, 00:08:13.888 "high_priority_weight": 0, 00:08:13.888 "nvme_adminq_poll_period_us": 10000, 00:08:13.888 "nvme_ioq_poll_period_us": 0, 00:08:13.888 "io_queue_requests": 0, 00:08:13.888 "delay_cmd_submit": true, 00:08:13.888 "transport_retry_count": 4, 00:08:13.888 "bdev_retry_count": 3, 00:08:13.888 "transport_ack_timeout": 0, 00:08:13.888 "ctrlr_loss_timeout_sec": 0, 00:08:13.888 "reconnect_delay_sec": 0, 00:08:13.888 "fast_io_fail_timeout_sec": 0, 00:08:13.888 "disable_auto_failback": false, 00:08:13.888 "generate_uuids": false, 00:08:13.888 "transport_tos": 0, 00:08:13.888 "nvme_error_stat": false, 00:08:13.888 "rdma_srq_size": 0, 00:08:13.888 "io_path_stat": false, 00:08:13.888 "allow_accel_sequence": false, 00:08:13.888 "rdma_max_cq_size": 0, 00:08:13.888 "rdma_cm_event_timeout_ms": 0, 00:08:13.888 "dhchap_digests": [ 00:08:13.888 "sha256", 00:08:13.888 "sha384", 00:08:13.888 "sha512" 00:08:13.888 ], 00:08:13.888 "dhchap_dhgroups": [ 00:08:13.888 "null", 00:08:13.888 "ffdhe2048", 00:08:13.888 "ffdhe3072", 00:08:13.888 "ffdhe4096", 00:08:13.888 "ffdhe6144", 00:08:13.888 "ffdhe8192" 00:08:13.888 ] 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "bdev_nvme_set_hotplug", 00:08:13.888 "params": { 00:08:13.888 "period_us": 100000, 00:08:13.888 "enable": false 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "bdev_wait_for_examine" 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "scsi", 00:08:13.888 "config": null 00:08:13.888 }, 00:08:13.888 { 
00:08:13.888 "subsystem": "scheduler", 00:08:13.888 "config": [ 00:08:13.888 { 00:08:13.888 "method": "framework_set_scheduler", 00:08:13.888 "params": { 00:08:13.888 "name": "static" 00:08:13.888 } 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "vhost_scsi", 00:08:13.888 "config": [] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "vhost_blk", 00:08:13.888 "config": [] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "ublk", 00:08:13.888 "config": [] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "nbd", 00:08:13.888 "config": [] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "nvmf", 00:08:13.888 "config": [ 00:08:13.888 { 00:08:13.888 "method": "nvmf_set_config", 00:08:13.888 "params": { 00:08:13.888 "discovery_filter": "match_any", 00:08:13.888 "admin_cmd_passthru": { 00:08:13.888 "identify_ctrlr": false 00:08:13.888 }, 00:08:13.888 "dhchap_digests": [ 00:08:13.888 "sha256", 00:08:13.888 "sha384", 00:08:13.888 "sha512" 00:08:13.888 ], 00:08:13.888 "dhchap_dhgroups": [ 00:08:13.888 "null", 00:08:13.888 "ffdhe2048", 00:08:13.888 "ffdhe3072", 00:08:13.888 "ffdhe4096", 00:08:13.888 "ffdhe6144", 00:08:13.888 "ffdhe8192" 00:08:13.888 ] 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "nvmf_set_max_subsystems", 00:08:13.888 "params": { 00:08:13.888 "max_subsystems": 1024 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "nvmf_set_crdt", 00:08:13.888 "params": { 00:08:13.888 "crdt1": 0, 00:08:13.888 "crdt2": 0, 00:08:13.888 "crdt3": 0 00:08:13.888 } 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "method": "nvmf_create_transport", 00:08:13.888 "params": { 00:08:13.888 "trtype": "TCP", 00:08:13.888 "max_queue_depth": 128, 00:08:13.888 "max_io_qpairs_per_ctrlr": 127, 00:08:13.888 "in_capsule_data_size": 4096, 00:08:13.888 "max_io_size": 131072, 00:08:13.888 "io_unit_size": 131072, 00:08:13.888 "max_aq_depth": 128, 00:08:13.888 "num_shared_buffers": 511, 
00:08:13.888 "buf_cache_size": 4294967295, 00:08:13.888 "dif_insert_or_strip": false, 00:08:13.888 "zcopy": false, 00:08:13.888 "c2h_success": true, 00:08:13.888 "sock_priority": 0, 00:08:13.888 "abort_timeout_sec": 1, 00:08:13.888 "ack_timeout": 0, 00:08:13.888 "data_wr_pool_size": 0 00:08:13.888 } 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 }, 00:08:13.888 { 00:08:13.888 "subsystem": "iscsi", 00:08:13.888 "config": [ 00:08:13.888 { 00:08:13.888 "method": "iscsi_set_options", 00:08:13.888 "params": { 00:08:13.888 "node_base": "iqn.2016-06.io.spdk", 00:08:13.888 "max_sessions": 128, 00:08:13.888 "max_connections_per_session": 2, 00:08:13.888 "max_queue_depth": 64, 00:08:13.888 "default_time2wait": 2, 00:08:13.888 "default_time2retain": 20, 00:08:13.888 "first_burst_length": 8192, 00:08:13.888 "immediate_data": true, 00:08:13.888 "allow_duplicated_isid": false, 00:08:13.888 "error_recovery_level": 0, 00:08:13.888 "nop_timeout": 60, 00:08:13.888 "nop_in_interval": 30, 00:08:13.888 "disable_chap": false, 00:08:13.888 "require_chap": false, 00:08:13.888 "mutual_chap": false, 00:08:13.888 "chap_group": 0, 00:08:13.888 "max_large_datain_per_connection": 64, 00:08:13.888 "max_r2t_per_connection": 4, 00:08:13.888 "pdu_pool_size": 36864, 00:08:13.888 "immediate_data_pool_size": 16384, 00:08:13.888 "data_out_pool_size": 2048 00:08:13.888 } 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 } 00:08:13.888 ] 00:08:13.888 } 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57298 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57298 ']' 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57298 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57298 00:08:13.888 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.889 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.889 killing process with pid 57298 00:08:13.889 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57298' 00:08:13.889 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57298 00:08:13.889 14:07:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57298 00:08:16.420 14:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57354 00:08:16.420 14:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:16.420 14:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57354 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57354 ']' 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57354 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57354 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:08:21.696 killing process with pid 57354 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57354' 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57354 00:08:21.696 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57354 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:23.601 00:08:23.601 real 0m11.084s 00:08:23.601 user 0m10.438s 00:08:23.601 sys 0m1.053s 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:23.601 ************************************ 00:08:23.601 END TEST skip_rpc_with_json 00:08:23.601 ************************************ 00:08:23.601 14:07:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:23.601 14:07:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.601 14:07:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.601 14:07:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.601 ************************************ 00:08:23.601 START TEST skip_rpc_with_delay 00:08:23.601 ************************************ 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:23.601 14:07:53 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:23.601 14:07:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:23.601 [2024-11-27 14:07:53.951914] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:23.601 14:07:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:23.601 14:07:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.601 14:07:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:23.601 14:07:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.601 00:08:23.601 real 0m0.165s 00:08:23.601 user 0m0.094s 00:08:23.601 sys 0m0.069s 00:08:23.601 14:07:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.601 14:07:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:23.601 ************************************ 00:08:23.601 END TEST skip_rpc_with_delay 00:08:23.601 ************************************ 00:08:23.601 14:07:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:23.601 14:07:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:23.601 14:07:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:23.601 14:07:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.602 14:07:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.602 14:07:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.602 ************************************ 00:08:23.602 START TEST exit_on_failed_rpc_init 00:08:23.602 ************************************ 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57482 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57482 00:08:23.602 14:07:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57482 ']' 00:08:23.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.602 14:07:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:23.860 [2024-11-27 14:07:54.222731] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:23.860 [2024-11-27 14:07:54.222969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57482 ] 00:08:24.117 [2024-11-27 14:07:54.414843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.117 [2024-11-27 14:07:54.575688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:25.052 14:07:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:25.052 14:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:25.311 [2024-11-27 14:07:55.593320] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:08:25.311 [2024-11-27 14:07:55.593502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57501 ] 00:08:25.311 [2024-11-27 14:07:55.773328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.570 [2024-11-27 14:07:55.908357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.570 [2024-11-27 14:07:55.908492] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:25.570 [2024-11-27 14:07:55.908514] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:25.570 [2024-11-27 14:07:55.908530] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57482 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57482 ']' 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57482 00:08:25.828 14:07:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:25.828 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.829 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57482 00:08:25.829 killing process with pid 57482 00:08:25.829 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.829 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.829 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57482' 00:08:25.829 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57482 00:08:25.829 14:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57482 00:08:28.360 00:08:28.360 real 0m4.522s 00:08:28.360 user 0m5.050s 00:08:28.360 sys 0m0.698s 00:08:28.361 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.361 ************************************ 00:08:28.361 END TEST exit_on_failed_rpc_init 00:08:28.361 ************************************ 00:08:28.361 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:28.361 14:07:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:28.361 ************************************ 00:08:28.361 END TEST skip_rpc 00:08:28.361 ************************************ 00:08:28.361 00:08:28.361 real 0m23.455s 00:08:28.361 user 0m22.490s 00:08:28.361 sys 0m2.491s 00:08:28.361 14:07:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.361 14:07:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.361 14:07:58 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:28.361 14:07:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.361 14:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.361 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:08:28.361 ************************************ 00:08:28.361 START TEST rpc_client 00:08:28.361 ************************************ 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:28.361 * Looking for test storage... 00:08:28.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@345 
-- # : 1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.361 14:07:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.361 --rc genhtml_branch_coverage=1 00:08:28.361 --rc genhtml_function_coverage=1 00:08:28.361 --rc genhtml_legend=1 00:08:28.361 --rc geninfo_all_blocks=1 00:08:28.361 --rc geninfo_unexecuted_blocks=1 00:08:28.361 00:08:28.361 ' 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.361 --rc genhtml_branch_coverage=1 00:08:28.361 --rc genhtml_function_coverage=1 00:08:28.361 --rc 
genhtml_legend=1 00:08:28.361 --rc geninfo_all_blocks=1 00:08:28.361 --rc geninfo_unexecuted_blocks=1 00:08:28.361 00:08:28.361 ' 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.361 --rc genhtml_branch_coverage=1 00:08:28.361 --rc genhtml_function_coverage=1 00:08:28.361 --rc genhtml_legend=1 00:08:28.361 --rc geninfo_all_blocks=1 00:08:28.361 --rc geninfo_unexecuted_blocks=1 00:08:28.361 00:08:28.361 ' 00:08:28.361 14:07:58 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.361 --rc genhtml_branch_coverage=1 00:08:28.361 --rc genhtml_function_coverage=1 00:08:28.361 --rc genhtml_legend=1 00:08:28.361 --rc geninfo_all_blocks=1 00:08:28.361 --rc geninfo_unexecuted_blocks=1 00:08:28.361 00:08:28.361 ' 00:08:28.361 14:07:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:28.619 OK 00:08:28.619 14:07:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:28.619 00:08:28.619 real 0m0.252s 00:08:28.619 user 0m0.157s 00:08:28.619 sys 0m0.103s 00:08:28.620 14:07:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.620 14:07:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 ************************************ 00:08:28.620 END TEST rpc_client 00:08:28.620 ************************************ 00:08:28.620 14:07:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:28.620 14:07:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.620 14:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.620 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 ************************************ 00:08:28.620 START TEST json_config 
00:08:28.620 ************************************ 00:08:28.620 14:07:58 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:28.620 14:07:59 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:28.620 14:07:59 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:28.620 14:07:59 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.880 14:07:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.880 14:07:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.880 14:07:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.880 14:07:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.880 14:07:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.880 14:07:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:28.880 14:07:59 json_config -- scripts/common.sh@345 -- # : 1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.880 14:07:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.880 14:07:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@353 -- # local d=1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.880 14:07:59 json_config -- scripts/common.sh@355 -- # echo 1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.880 14:07:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@353 -- # local d=2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.880 14:07:59 json_config -- scripts/common.sh@355 -- # echo 2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.880 14:07:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.880 14:07:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.880 14:07:59 json_config -- scripts/common.sh@368 -- # return 0 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:28.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.880 --rc genhtml_branch_coverage=1 00:08:28.880 --rc genhtml_function_coverage=1 00:08:28.880 --rc genhtml_legend=1 00:08:28.880 --rc geninfo_all_blocks=1 00:08:28.880 --rc geninfo_unexecuted_blocks=1 00:08:28.880 00:08:28.880 ' 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:28.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.880 --rc genhtml_branch_coverage=1 00:08:28.880 --rc genhtml_function_coverage=1 00:08:28.880 --rc genhtml_legend=1 00:08:28.880 --rc geninfo_all_blocks=1 00:08:28.880 --rc geninfo_unexecuted_blocks=1 00:08:28.880 00:08:28.880 ' 00:08:28.880 14:07:59 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:28.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.880 --rc genhtml_branch_coverage=1 00:08:28.880 --rc genhtml_function_coverage=1 00:08:28.880 --rc genhtml_legend=1 00:08:28.880 --rc geninfo_all_blocks=1 00:08:28.880 --rc geninfo_unexecuted_blocks=1 00:08:28.880 00:08:28.880 ' 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:28.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.880 --rc genhtml_branch_coverage=1 00:08:28.880 --rc genhtml_function_coverage=1 00:08:28.880 --rc genhtml_legend=1 00:08:28.880 --rc geninfo_all_blocks=1 00:08:28.880 --rc geninfo_unexecuted_blocks=1 00:08:28.880 00:08:28.880 ' 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5783c65c-7dfc-4d47-9814-200973a46653 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5783c65c-7dfc-4d47-9814-200973a46653 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.880 14:07:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.880 14:07:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.880 14:07:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.880 14:07:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.880 14:07:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.880 14:07:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.880 14:07:59 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.880 14:07:59 json_config -- paths/export.sh@5 -- # export PATH 00:08:28.880 14:07:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@51 -- # : 0 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.880 14:07:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.880 WARNING: No tests are enabled so not running JSON configuration tests 00:08:28.880 14:07:59 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:28.880 14:07:59 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:28.880 ************************************ 00:08:28.880 END TEST json_config 00:08:28.880 ************************************ 00:08:28.880 00:08:28.880 real 0m0.196s 00:08:28.880 user 0m0.126s 00:08:28.880 sys 0m0.071s 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.880 14:07:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.880 14:07:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:28.880 14:07:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.880 14:07:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.880 14:07:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.880 ************************************ 00:08:28.880 START TEST json_config_extra_key 00:08:28.880 ************************************ 00:08:28.880 14:07:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:28.880 14:07:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:28.880 14:07:59 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:08:28.880 14:07:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:28.880 14:07:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:28.880 14:07:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.138 14:07:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:29.138 14:07:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.138 14:07:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.138 --rc genhtml_branch_coverage=1 00:08:29.138 --rc genhtml_function_coverage=1 00:08:29.138 --rc genhtml_legend=1 00:08:29.138 --rc geninfo_all_blocks=1 00:08:29.138 --rc geninfo_unexecuted_blocks=1 00:08:29.138 00:08:29.138 ' 00:08:29.138 14:07:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.138 --rc genhtml_branch_coverage=1 00:08:29.138 --rc genhtml_function_coverage=1 00:08:29.138 --rc 
genhtml_legend=1 00:08:29.138 --rc geninfo_all_blocks=1 00:08:29.138 --rc geninfo_unexecuted_blocks=1 00:08:29.138 00:08:29.138 ' 00:08:29.138 14:07:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.139 --rc genhtml_branch_coverage=1 00:08:29.139 --rc genhtml_function_coverage=1 00:08:29.139 --rc genhtml_legend=1 00:08:29.139 --rc geninfo_all_blocks=1 00:08:29.139 --rc geninfo_unexecuted_blocks=1 00:08:29.139 00:08:29.139 ' 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.139 --rc genhtml_branch_coverage=1 00:08:29.139 --rc genhtml_function_coverage=1 00:08:29.139 --rc genhtml_legend=1 00:08:29.139 --rc geninfo_all_blocks=1 00:08:29.139 --rc geninfo_unexecuted_blocks=1 00:08:29.139 00:08:29.139 ' 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5783c65c-7dfc-4d47-9814-200973a46653 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5783c65c-7dfc-4d47-9814-200973a46653 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.139 14:07:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.139 14:07:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.139 14:07:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.139 14:07:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.139 14:07:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.139 14:07:59 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.139 14:07:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.139 14:07:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:29.139 14:07:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.139 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.139 14:07:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:29.139 INFO: launching applications... 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:08:29.139 14:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57710 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:29.139 Waiting for target to run... 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57710 /var/tmp/spdk_tgt.sock 00:08:29.139 14:07:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57710 ']' 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:08:29.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.139 14:07:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:29.139 [2024-11-27 14:07:59.545007] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:29.139 [2024-11-27 14:07:59.545507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57710 ] 00:08:29.706 [2024-11-27 14:08:00.027439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.706 [2024-11-27 14:08:00.185630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.642 00:08:30.642 INFO: shutting down applications... 00:08:30.642 14:08:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.642 14:08:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:30.642 14:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:08:30.642 14:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57710 ]] 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57710 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:30.642 14:08:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:31.211 14:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:31.211 14:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:31.211 14:08:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:31.211 14:08:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:31.470 14:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:31.470 14:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:31.470 14:08:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:31.470 14:08:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:32.037 14:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:32.037 14:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:32.037 14:08:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:32.037 14:08:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:32.605 14:08:02 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:08:32.605 14:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:32.605 14:08:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:32.605 14:08:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:33.263 14:08:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:33.263 14:08:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:33.263 14:08:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:33.263 14:08:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57710 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:33.522 SPDK target shutdown done 00:08:33.522 14:08:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:33.522 Success 00:08:33.522 14:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:33.522 00:08:33.522 real 0m4.713s 00:08:33.522 user 0m4.184s 00:08:33.522 sys 0m0.670s 00:08:33.522 ************************************ 00:08:33.522 END TEST json_config_extra_key 00:08:33.522 ************************************ 00:08:33.522 14:08:03 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.522 14:08:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:33.522 14:08:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:33.523 14:08:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.523 14:08:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.523 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:08:33.523 ************************************ 00:08:33.523 START TEST alias_rpc 00:08:33.523 ************************************ 00:08:33.523 14:08:03 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:33.783 * Looking for test storage... 00:08:33.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:33.783 14:08:04 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.783 14:08:04 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.783 14:08:04 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.783 14:08:04 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:33.783 14:08:04 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.783 14:08:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:33.783 14:08:04 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.783 14:08:04 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.783 --rc genhtml_branch_coverage=1 00:08:33.783 --rc genhtml_function_coverage=1 00:08:33.784 --rc genhtml_legend=1 00:08:33.784 --rc geninfo_all_blocks=1 00:08:33.784 --rc geninfo_unexecuted_blocks=1 00:08:33.784 00:08:33.784 ' 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.784 --rc genhtml_branch_coverage=1 00:08:33.784 --rc genhtml_function_coverage=1 00:08:33.784 --rc 
genhtml_legend=1 00:08:33.784 --rc geninfo_all_blocks=1 00:08:33.784 --rc geninfo_unexecuted_blocks=1 00:08:33.784 00:08:33.784 ' 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.784 --rc genhtml_branch_coverage=1 00:08:33.784 --rc genhtml_function_coverage=1 00:08:33.784 --rc genhtml_legend=1 00:08:33.784 --rc geninfo_all_blocks=1 00:08:33.784 --rc geninfo_unexecuted_blocks=1 00:08:33.784 00:08:33.784 ' 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.784 --rc genhtml_branch_coverage=1 00:08:33.784 --rc genhtml_function_coverage=1 00:08:33.784 --rc genhtml_legend=1 00:08:33.784 --rc geninfo_all_blocks=1 00:08:33.784 --rc geninfo_unexecuted_blocks=1 00:08:33.784 00:08:33.784 ' 00:08:33.784 14:08:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:33.784 14:08:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57827 00:08:33.784 14:08:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57827 00:08:33.784 14:08:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57827 ']' 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.784 14:08:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.043 [2024-11-27 14:08:04.336722] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:34.043 [2024-11-27 14:08:04.337157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57827 ] 00:08:34.043 [2024-11-27 14:08:04.520719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.302 [2024-11-27 14:08:04.661354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.238 14:08:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.238 14:08:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:35.238 14:08:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:35.498 14:08:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57827 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57827 ']' 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57827 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57827 00:08:35.498 killing process with pid 57827 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57827' 00:08:35.498 14:08:05 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57827 00:08:35.498 14:08:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 57827 00:08:38.095 ************************************ 00:08:38.095 END TEST alias_rpc 00:08:38.095 ************************************ 00:08:38.095 00:08:38.095 real 0m4.192s 00:08:38.095 user 0m4.293s 00:08:38.095 sys 0m0.654s 00:08:38.095 14:08:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.095 14:08:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 14:08:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:38.095 14:08:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:38.095 14:08:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.095 14:08:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.095 14:08:08 -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 ************************************ 00:08:38.095 START TEST spdkcli_tcp 00:08:38.095 ************************************ 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:38.095 * Looking for test storage... 
00:08:38.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.095 14:08:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.095 --rc genhtml_branch_coverage=1 00:08:38.095 --rc genhtml_function_coverage=1 00:08:38.095 --rc genhtml_legend=1 00:08:38.095 --rc geninfo_all_blocks=1 00:08:38.095 --rc geninfo_unexecuted_blocks=1 00:08:38.095 00:08:38.095 ' 00:08:38.095 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.095 --rc genhtml_branch_coverage=1 00:08:38.095 --rc genhtml_function_coverage=1 00:08:38.095 --rc genhtml_legend=1 00:08:38.095 --rc geninfo_all_blocks=1 00:08:38.095 --rc geninfo_unexecuted_blocks=1 00:08:38.095 00:08:38.095 ' 00:08:38.095 14:08:08 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.095 --rc genhtml_branch_coverage=1 00:08:38.095 --rc genhtml_function_coverage=1 00:08:38.095 --rc genhtml_legend=1 00:08:38.095 --rc geninfo_all_blocks=1 00:08:38.095 --rc geninfo_unexecuted_blocks=1 00:08:38.095 00:08:38.095 ' 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.096 --rc genhtml_branch_coverage=1 00:08:38.096 --rc genhtml_function_coverage=1 00:08:38.096 --rc genhtml_legend=1 00:08:38.096 --rc geninfo_all_blocks=1 00:08:38.096 --rc geninfo_unexecuted_blocks=1 00:08:38.096 00:08:38.096 ' 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57934 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57934 00:08:38.096 14:08:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:38.096 14:08:08 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57934 ']' 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.096 14:08:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.096 [2024-11-27 14:08:08.565198] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:38.096 [2024-11-27 14:08:08.565637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57934 ] 00:08:38.353 [2024-11-27 14:08:08.753624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.611 [2024-11-27 14:08:08.895076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.611 [2024-11-27 14:08:08.895093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.543 14:08:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.543 14:08:09 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:39.543 14:08:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57951 00:08:39.543 14:08:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:39.543 14:08:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:39.800 [ 00:08:39.800 "bdev_malloc_delete", 
00:08:39.800 "bdev_malloc_create", 00:08:39.800 "bdev_null_resize", 00:08:39.800 "bdev_null_delete", 00:08:39.800 "bdev_null_create", 00:08:39.800 "bdev_nvme_cuse_unregister", 00:08:39.800 "bdev_nvme_cuse_register", 00:08:39.800 "bdev_opal_new_user", 00:08:39.800 "bdev_opal_set_lock_state", 00:08:39.800 "bdev_opal_delete", 00:08:39.800 "bdev_opal_get_info", 00:08:39.800 "bdev_opal_create", 00:08:39.800 "bdev_nvme_opal_revert", 00:08:39.800 "bdev_nvme_opal_init", 00:08:39.800 "bdev_nvme_send_cmd", 00:08:39.800 "bdev_nvme_set_keys", 00:08:39.800 "bdev_nvme_get_path_iostat", 00:08:39.800 "bdev_nvme_get_mdns_discovery_info", 00:08:39.800 "bdev_nvme_stop_mdns_discovery", 00:08:39.800 "bdev_nvme_start_mdns_discovery", 00:08:39.800 "bdev_nvme_set_multipath_policy", 00:08:39.800 "bdev_nvme_set_preferred_path", 00:08:39.800 "bdev_nvme_get_io_paths", 00:08:39.800 "bdev_nvme_remove_error_injection", 00:08:39.800 "bdev_nvme_add_error_injection", 00:08:39.800 "bdev_nvme_get_discovery_info", 00:08:39.800 "bdev_nvme_stop_discovery", 00:08:39.800 "bdev_nvme_start_discovery", 00:08:39.800 "bdev_nvme_get_controller_health_info", 00:08:39.800 "bdev_nvme_disable_controller", 00:08:39.800 "bdev_nvme_enable_controller", 00:08:39.800 "bdev_nvme_reset_controller", 00:08:39.800 "bdev_nvme_get_transport_statistics", 00:08:39.800 "bdev_nvme_apply_firmware", 00:08:39.800 "bdev_nvme_detach_controller", 00:08:39.800 "bdev_nvme_get_controllers", 00:08:39.800 "bdev_nvme_attach_controller", 00:08:39.800 "bdev_nvme_set_hotplug", 00:08:39.800 "bdev_nvme_set_options", 00:08:39.800 "bdev_passthru_delete", 00:08:39.800 "bdev_passthru_create", 00:08:39.800 "bdev_lvol_set_parent_bdev", 00:08:39.800 "bdev_lvol_set_parent", 00:08:39.800 "bdev_lvol_check_shallow_copy", 00:08:39.800 "bdev_lvol_start_shallow_copy", 00:08:39.800 "bdev_lvol_grow_lvstore", 00:08:39.800 "bdev_lvol_get_lvols", 00:08:39.800 "bdev_lvol_get_lvstores", 00:08:39.800 "bdev_lvol_delete", 00:08:39.800 "bdev_lvol_set_read_only", 
00:08:39.800 "bdev_lvol_resize", 00:08:39.800 "bdev_lvol_decouple_parent", 00:08:39.800 "bdev_lvol_inflate", 00:08:39.800 "bdev_lvol_rename", 00:08:39.800 "bdev_lvol_clone_bdev", 00:08:39.800 "bdev_lvol_clone", 00:08:39.800 "bdev_lvol_snapshot", 00:08:39.800 "bdev_lvol_create", 00:08:39.800 "bdev_lvol_delete_lvstore", 00:08:39.800 "bdev_lvol_rename_lvstore", 00:08:39.800 "bdev_lvol_create_lvstore", 00:08:39.800 "bdev_raid_set_options", 00:08:39.800 "bdev_raid_remove_base_bdev", 00:08:39.800 "bdev_raid_add_base_bdev", 00:08:39.800 "bdev_raid_delete", 00:08:39.800 "bdev_raid_create", 00:08:39.800 "bdev_raid_get_bdevs", 00:08:39.800 "bdev_error_inject_error", 00:08:39.800 "bdev_error_delete", 00:08:39.800 "bdev_error_create", 00:08:39.800 "bdev_split_delete", 00:08:39.800 "bdev_split_create", 00:08:39.800 "bdev_delay_delete", 00:08:39.800 "bdev_delay_create", 00:08:39.800 "bdev_delay_update_latency", 00:08:39.800 "bdev_zone_block_delete", 00:08:39.800 "bdev_zone_block_create", 00:08:39.800 "blobfs_create", 00:08:39.800 "blobfs_detect", 00:08:39.800 "blobfs_set_cache_size", 00:08:39.800 "bdev_aio_delete", 00:08:39.800 "bdev_aio_rescan", 00:08:39.800 "bdev_aio_create", 00:08:39.800 "bdev_ftl_set_property", 00:08:39.800 "bdev_ftl_get_properties", 00:08:39.800 "bdev_ftl_get_stats", 00:08:39.800 "bdev_ftl_unmap", 00:08:39.800 "bdev_ftl_unload", 00:08:39.800 "bdev_ftl_delete", 00:08:39.800 "bdev_ftl_load", 00:08:39.800 "bdev_ftl_create", 00:08:39.800 "bdev_virtio_attach_controller", 00:08:39.800 "bdev_virtio_scsi_get_devices", 00:08:39.800 "bdev_virtio_detach_controller", 00:08:39.800 "bdev_virtio_blk_set_hotplug", 00:08:39.800 "bdev_iscsi_delete", 00:08:39.800 "bdev_iscsi_create", 00:08:39.800 "bdev_iscsi_set_options", 00:08:39.800 "accel_error_inject_error", 00:08:39.800 "ioat_scan_accel_module", 00:08:39.800 "dsa_scan_accel_module", 00:08:39.800 "iaa_scan_accel_module", 00:08:39.800 "keyring_file_remove_key", 00:08:39.800 "keyring_file_add_key", 00:08:39.800 
"keyring_linux_set_options", 00:08:39.800 "fsdev_aio_delete", 00:08:39.800 "fsdev_aio_create", 00:08:39.800 "iscsi_get_histogram", 00:08:39.800 "iscsi_enable_histogram", 00:08:39.800 "iscsi_set_options", 00:08:39.800 "iscsi_get_auth_groups", 00:08:39.800 "iscsi_auth_group_remove_secret", 00:08:39.800 "iscsi_auth_group_add_secret", 00:08:39.800 "iscsi_delete_auth_group", 00:08:39.800 "iscsi_create_auth_group", 00:08:39.800 "iscsi_set_discovery_auth", 00:08:39.800 "iscsi_get_options", 00:08:39.800 "iscsi_target_node_request_logout", 00:08:39.800 "iscsi_target_node_set_redirect", 00:08:39.800 "iscsi_target_node_set_auth", 00:08:39.800 "iscsi_target_node_add_lun", 00:08:39.800 "iscsi_get_stats", 00:08:39.800 "iscsi_get_connections", 00:08:39.800 "iscsi_portal_group_set_auth", 00:08:39.800 "iscsi_start_portal_group", 00:08:39.800 "iscsi_delete_portal_group", 00:08:39.800 "iscsi_create_portal_group", 00:08:39.800 "iscsi_get_portal_groups", 00:08:39.800 "iscsi_delete_target_node", 00:08:39.800 "iscsi_target_node_remove_pg_ig_maps", 00:08:39.800 "iscsi_target_node_add_pg_ig_maps", 00:08:39.800 "iscsi_create_target_node", 00:08:39.800 "iscsi_get_target_nodes", 00:08:39.800 "iscsi_delete_initiator_group", 00:08:39.800 "iscsi_initiator_group_remove_initiators", 00:08:39.800 "iscsi_initiator_group_add_initiators", 00:08:39.800 "iscsi_create_initiator_group", 00:08:39.800 "iscsi_get_initiator_groups", 00:08:39.800 "nvmf_set_crdt", 00:08:39.800 "nvmf_set_config", 00:08:39.800 "nvmf_set_max_subsystems", 00:08:39.800 "nvmf_stop_mdns_prr", 00:08:39.800 "nvmf_publish_mdns_prr", 00:08:39.800 "nvmf_subsystem_get_listeners", 00:08:39.800 "nvmf_subsystem_get_qpairs", 00:08:39.800 "nvmf_subsystem_get_controllers", 00:08:39.801 "nvmf_get_stats", 00:08:39.801 "nvmf_get_transports", 00:08:39.801 "nvmf_create_transport", 00:08:39.801 "nvmf_get_targets", 00:08:39.801 "nvmf_delete_target", 00:08:39.801 "nvmf_create_target", 00:08:39.801 "nvmf_subsystem_allow_any_host", 00:08:39.801 
"nvmf_subsystem_set_keys", 00:08:39.801 "nvmf_subsystem_remove_host", 00:08:39.801 "nvmf_subsystem_add_host", 00:08:39.801 "nvmf_ns_remove_host", 00:08:39.801 "nvmf_ns_add_host", 00:08:39.801 "nvmf_subsystem_remove_ns", 00:08:39.801 "nvmf_subsystem_set_ns_ana_group", 00:08:39.801 "nvmf_subsystem_add_ns", 00:08:39.801 "nvmf_subsystem_listener_set_ana_state", 00:08:39.801 "nvmf_discovery_get_referrals", 00:08:39.801 "nvmf_discovery_remove_referral", 00:08:39.801 "nvmf_discovery_add_referral", 00:08:39.801 "nvmf_subsystem_remove_listener", 00:08:39.801 "nvmf_subsystem_add_listener", 00:08:39.801 "nvmf_delete_subsystem", 00:08:39.801 "nvmf_create_subsystem", 00:08:39.801 "nvmf_get_subsystems", 00:08:39.801 "env_dpdk_get_mem_stats", 00:08:39.801 "nbd_get_disks", 00:08:39.801 "nbd_stop_disk", 00:08:39.801 "nbd_start_disk", 00:08:39.801 "ublk_recover_disk", 00:08:39.801 "ublk_get_disks", 00:08:39.801 "ublk_stop_disk", 00:08:39.801 "ublk_start_disk", 00:08:39.801 "ublk_destroy_target", 00:08:39.801 "ublk_create_target", 00:08:39.801 "virtio_blk_create_transport", 00:08:39.801 "virtio_blk_get_transports", 00:08:39.801 "vhost_controller_set_coalescing", 00:08:39.801 "vhost_get_controllers", 00:08:39.801 "vhost_delete_controller", 00:08:39.801 "vhost_create_blk_controller", 00:08:39.801 "vhost_scsi_controller_remove_target", 00:08:39.801 "vhost_scsi_controller_add_target", 00:08:39.801 "vhost_start_scsi_controller", 00:08:39.801 "vhost_create_scsi_controller", 00:08:39.801 "thread_set_cpumask", 00:08:39.801 "scheduler_set_options", 00:08:39.801 "framework_get_governor", 00:08:39.801 "framework_get_scheduler", 00:08:39.801 "framework_set_scheduler", 00:08:39.801 "framework_get_reactors", 00:08:39.801 "thread_get_io_channels", 00:08:39.801 "thread_get_pollers", 00:08:39.801 "thread_get_stats", 00:08:39.801 "framework_monitor_context_switch", 00:08:39.801 "spdk_kill_instance", 00:08:39.801 "log_enable_timestamps", 00:08:39.801 "log_get_flags", 00:08:39.801 "log_clear_flag", 
00:08:39.801 "log_set_flag", 00:08:39.801 "log_get_level", 00:08:39.801 "log_set_level", 00:08:39.801 "log_get_print_level", 00:08:39.801 "log_set_print_level", 00:08:39.801 "framework_enable_cpumask_locks", 00:08:39.801 "framework_disable_cpumask_locks", 00:08:39.801 "framework_wait_init", 00:08:39.801 "framework_start_init", 00:08:39.801 "scsi_get_devices", 00:08:39.801 "bdev_get_histogram", 00:08:39.801 "bdev_enable_histogram", 00:08:39.801 "bdev_set_qos_limit", 00:08:39.801 "bdev_set_qd_sampling_period", 00:08:39.801 "bdev_get_bdevs", 00:08:39.801 "bdev_reset_iostat", 00:08:39.801 "bdev_get_iostat", 00:08:39.801 "bdev_examine", 00:08:39.801 "bdev_wait_for_examine", 00:08:39.801 "bdev_set_options", 00:08:39.801 "accel_get_stats", 00:08:39.801 "accel_set_options", 00:08:39.801 "accel_set_driver", 00:08:39.801 "accel_crypto_key_destroy", 00:08:39.801 "accel_crypto_keys_get", 00:08:39.801 "accel_crypto_key_create", 00:08:39.801 "accel_assign_opc", 00:08:39.801 "accel_get_module_info", 00:08:39.801 "accel_get_opc_assignments", 00:08:39.801 "vmd_rescan", 00:08:39.801 "vmd_remove_device", 00:08:39.801 "vmd_enable", 00:08:39.801 "sock_get_default_impl", 00:08:39.801 "sock_set_default_impl", 00:08:39.801 "sock_impl_set_options", 00:08:39.801 "sock_impl_get_options", 00:08:39.801 "iobuf_get_stats", 00:08:39.801 "iobuf_set_options", 00:08:39.801 "keyring_get_keys", 00:08:39.801 "framework_get_pci_devices", 00:08:39.801 "framework_get_config", 00:08:39.801 "framework_get_subsystems", 00:08:39.801 "fsdev_set_opts", 00:08:39.801 "fsdev_get_opts", 00:08:39.801 "trace_get_info", 00:08:39.801 "trace_get_tpoint_group_mask", 00:08:39.801 "trace_disable_tpoint_group", 00:08:39.801 "trace_enable_tpoint_group", 00:08:39.801 "trace_clear_tpoint_mask", 00:08:39.801 "trace_set_tpoint_mask", 00:08:39.801 "notify_get_notifications", 00:08:39.801 "notify_get_types", 00:08:39.801 "spdk_get_version", 00:08:39.801 "rpc_get_methods" 00:08:39.801 ] 00:08:39.801 14:08:10 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.801 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:39.801 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57934 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57934 ']' 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57934 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57934 00:08:39.801 killing process with pid 57934 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57934' 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57934 00:08:39.801 14:08:10 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57934 00:08:42.328 ************************************ 00:08:42.328 END TEST spdkcli_tcp 00:08:42.328 ************************************ 00:08:42.328 00:08:42.328 real 0m4.143s 00:08:42.328 user 0m7.536s 00:08:42.328 sys 0m0.679s 00:08:42.328 14:08:12 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.329 14:08:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.329 14:08:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:42.329 14:08:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.329 14:08:12 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.329 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:08:42.329 ************************************ 00:08:42.329 START TEST dpdk_mem_utility 00:08:42.329 ************************************ 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:42.329 * Looking for test storage... 00:08:42.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:42.329 
14:08:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.329 14:08:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.329 --rc genhtml_branch_coverage=1 00:08:42.329 --rc genhtml_function_coverage=1 00:08:42.329 --rc genhtml_legend=1 00:08:42.329 --rc geninfo_all_blocks=1 00:08:42.329 --rc geninfo_unexecuted_blocks=1 00:08:42.329 00:08:42.329 ' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.329 --rc 
genhtml_branch_coverage=1 00:08:42.329 --rc genhtml_function_coverage=1 00:08:42.329 --rc genhtml_legend=1 00:08:42.329 --rc geninfo_all_blocks=1 00:08:42.329 --rc geninfo_unexecuted_blocks=1 00:08:42.329 00:08:42.329 ' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.329 --rc genhtml_branch_coverage=1 00:08:42.329 --rc genhtml_function_coverage=1 00:08:42.329 --rc genhtml_legend=1 00:08:42.329 --rc geninfo_all_blocks=1 00:08:42.329 --rc geninfo_unexecuted_blocks=1 00:08:42.329 00:08:42.329 ' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.329 --rc genhtml_branch_coverage=1 00:08:42.329 --rc genhtml_function_coverage=1 00:08:42.329 --rc genhtml_legend=1 00:08:42.329 --rc geninfo_all_blocks=1 00:08:42.329 --rc geninfo_unexecuted_blocks=1 00:08:42.329 00:08:42.329 ' 00:08:42.329 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:42.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:42.329 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58056 00:08:42.329 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58056 00:08:42.329 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58056 ']' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.329 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:42.329 [2024-11-27 14:08:12.750541] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:08:42.329 [2024-11-27 14:08:12.750999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58056 ] 00:08:42.587 [2024-11-27 14:08:12.936231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.587 [2024-11-27 14:08:13.067494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.540 14:08:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.540 14:08:13 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:43.540 14:08:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:43.540 14:08:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:43.540 14:08:13 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.540 14:08:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:43.540 { 00:08:43.540 "filename": "/tmp/spdk_mem_dump.txt" 00:08:43.540 } 00:08:43.540 14:08:13 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.540 14:08:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:43.540 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:43.540 1 heaps totaling size 824.000000 MiB 00:08:43.540 size: 824.000000 MiB heap id: 0 00:08:43.540 end heaps---------- 00:08:43.540 9 mempools totaling size 603.782043 MiB 00:08:43.540 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:43.540 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:43.540 size: 100.555481 MiB name: bdev_io_58056 00:08:43.540 size: 50.003479 MiB name: msgpool_58056 00:08:43.540 size: 36.509338 MiB name: fsdev_io_58056 00:08:43.540 size: 
21.763794 MiB name: PDU_Pool 00:08:43.540 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:43.540 size: 4.133484 MiB name: evtpool_58056 00:08:43.540 size: 0.026123 MiB name: Session_Pool 00:08:43.540 end mempools------- 00:08:43.540 6 memzones totaling size 4.142822 MiB 00:08:43.540 size: 1.000366 MiB name: RG_ring_0_58056 00:08:43.540 size: 1.000366 MiB name: RG_ring_1_58056 00:08:43.540 size: 1.000366 MiB name: RG_ring_4_58056 00:08:43.540 size: 1.000366 MiB name: RG_ring_5_58056 00:08:43.540 size: 0.125366 MiB name: RG_ring_2_58056 00:08:43.540 size: 0.015991 MiB name: RG_ring_3_58056 00:08:43.540 end memzones------- 00:08:43.540 14:08:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:43.800 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 00:08:43.800 list of free elements. size: 16.780396 MiB 00:08:43.800 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:43.800 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:43.800 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:43.800 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:43.800 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:43.800 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:43.800 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:43.800 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:43.800 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:43.800 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:43.800 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:43.800 element at address: 0x20001b400000 with size: 0.561462 MiB 00:08:43.800 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:43.800 element at address: 0x200019600000 with size: 0.488220 MiB 00:08:43.800 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:08:43.800 element at address: 0x200012c00000 with size: 0.433472 MiB 00:08:43.800 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:43.800 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:43.800 list of standard malloc elements. size: 199.288696 MiB 00:08:43.800 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:43.800 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:43.800 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:43.800 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:43.800 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:43.800 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:43.800 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:43.800 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:43.800 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:43.800 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:43.800 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:43.800 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:43.800 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:43.800 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:43.800 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:43.800 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:43.801 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:43.801 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:08:43.801 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:43.801 element at 
address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4909c0 with size: 0.000244 MiB 
00:08:43.801 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:43.801 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4925c0 with 
size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:43.802 element at address: 
0x20001b4941c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:43.802 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:43.802 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:43.802 
element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d080 with size: 0.000244 
MiB 00:08:43.802 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ec80 
with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:43.802 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:43.803 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:43.803 list of memzone associated elements. 
size: 607.930908 MiB 00:08:43.803 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:43.803 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:43.803 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:43.803 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:43.803 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:43.803 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58056_0 00:08:43.803 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:43.803 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58056_0 00:08:43.803 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:43.803 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58056_0 00:08:43.803 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:43.803 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:43.803 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:43.803 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:43.803 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:43.803 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58056_0 00:08:43.803 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:43.803 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58056 00:08:43.803 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:43.803 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58056 00:08:43.803 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:43.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:43.803 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:43.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:43.803 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:43.803 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:43.803 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:43.803 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:43.803 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:43.803 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58056 00:08:43.803 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:43.803 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58056 00:08:43.803 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:43.803 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58056 00:08:43.803 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:43.803 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58056 00:08:43.803 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:43.803 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58056 00:08:43.803 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:43.803 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58056 00:08:43.803 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:43.803 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:43.803 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:43.803 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:43.803 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:43.803 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:43.803 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:43.803 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58056 00:08:43.803 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:43.803 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58056 00:08:43.803 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:43.803 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:43.803 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:43.803 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:43.803 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:43.803 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58056 00:08:43.803 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:43.803 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:43.803 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:43.803 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58056 00:08:43.803 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:43.803 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58056 00:08:43.803 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:43.803 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58056 00:08:43.803 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:43.803 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:43.803 14:08:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:43.803 14:08:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58056 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58056 ']' 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58056 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58056 00:08:43.803 killing process with pid 58056 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58056' 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58056 00:08:43.803 14:08:14 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58056 00:08:46.338 00:08:46.338 real 0m3.959s 00:08:46.338 user 0m3.978s 00:08:46.338 sys 0m0.627s 00:08:46.338 14:08:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.338 ************************************ 00:08:46.338 14:08:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:46.338 END TEST dpdk_mem_utility 00:08:46.338 ************************************ 00:08:46.338 14:08:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:46.338 14:08:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.338 14:08:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.338 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.338 ************************************ 00:08:46.338 START TEST event 00:08:46.338 ************************************ 00:08:46.338 14:08:16 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:46.338 * Looking for test storage... 
00:08:46.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:46.338 14:08:16 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.338 14:08:16 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.338 14:08:16 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.338 14:08:16 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.338 14:08:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.338 14:08:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.338 14:08:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.338 14:08:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.338 14:08:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.338 14:08:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.338 14:08:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.338 14:08:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.338 14:08:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.338 14:08:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.338 14:08:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.338 14:08:16 event -- scripts/common.sh@344 -- # case "$op" in 00:08:46.338 14:08:16 event -- scripts/common.sh@345 -- # : 1 00:08:46.338 14:08:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.338 14:08:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.338 14:08:16 event -- scripts/common.sh@365 -- # decimal 1 00:08:46.338 14:08:16 event -- scripts/common.sh@353 -- # local d=1 00:08:46.338 14:08:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.338 14:08:16 event -- scripts/common.sh@355 -- # echo 1 00:08:46.338 14:08:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.338 14:08:16 event -- scripts/common.sh@366 -- # decimal 2 00:08:46.338 14:08:16 event -- scripts/common.sh@353 -- # local d=2 00:08:46.338 14:08:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.338 14:08:16 event -- scripts/common.sh@355 -- # echo 2 00:08:46.338 14:08:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.338 14:08:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.339 14:08:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.339 14:08:16 event -- scripts/common.sh@368 -- # return 0 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.339 --rc genhtml_branch_coverage=1 00:08:46.339 --rc genhtml_function_coverage=1 00:08:46.339 --rc genhtml_legend=1 00:08:46.339 --rc geninfo_all_blocks=1 00:08:46.339 --rc geninfo_unexecuted_blocks=1 00:08:46.339 00:08:46.339 ' 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.339 --rc genhtml_branch_coverage=1 00:08:46.339 --rc genhtml_function_coverage=1 00:08:46.339 --rc genhtml_legend=1 00:08:46.339 --rc geninfo_all_blocks=1 00:08:46.339 --rc geninfo_unexecuted_blocks=1 00:08:46.339 00:08:46.339 ' 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.339 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:46.339 --rc genhtml_branch_coverage=1 00:08:46.339 --rc genhtml_function_coverage=1 00:08:46.339 --rc genhtml_legend=1 00:08:46.339 --rc geninfo_all_blocks=1 00:08:46.339 --rc geninfo_unexecuted_blocks=1 00:08:46.339 00:08:46.339 ' 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.339 --rc genhtml_branch_coverage=1 00:08:46.339 --rc genhtml_function_coverage=1 00:08:46.339 --rc genhtml_legend=1 00:08:46.339 --rc geninfo_all_blocks=1 00:08:46.339 --rc geninfo_unexecuted_blocks=1 00:08:46.339 00:08:46.339 ' 00:08:46.339 14:08:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:46.339 14:08:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:46.339 14:08:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:46.339 14:08:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.339 14:08:16 event -- common/autotest_common.sh@10 -- # set +x 00:08:46.339 ************************************ 00:08:46.339 START TEST event_perf 00:08:46.339 ************************************ 00:08:46.339 14:08:16 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:46.339 Running I/O for 1 seconds...[2024-11-27 14:08:16.720959] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:08:46.339 [2024-11-27 14:08:16.721316] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58164 ] 00:08:46.598 [2024-11-27 14:08:16.910684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.598 [2024-11-27 14:08:17.095825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.598 [2024-11-27 14:08:17.096002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.598 [2024-11-27 14:08:17.096898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.598 [2024-11-27 14:08:17.096969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.978 Running I/O for 1 seconds... 00:08:47.978 lcore 0: 201275 00:08:47.978 lcore 1: 201274 00:08:47.978 lcore 2: 201274 00:08:47.978 lcore 3: 201274 00:08:47.978 done. 
00:08:47.978 00:08:47.978 real 0m1.655s 00:08:47.978 user 0m4.397s 00:08:47.978 sys 0m0.132s 00:08:47.978 14:08:18 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.978 14:08:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:47.978 ************************************ 00:08:47.978 END TEST event_perf 00:08:47.978 ************************************ 00:08:47.978 14:08:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:47.978 14:08:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:47.978 14:08:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.978 14:08:18 event -- common/autotest_common.sh@10 -- # set +x 00:08:47.978 ************************************ 00:08:47.978 START TEST event_reactor 00:08:47.978 ************************************ 00:08:47.978 14:08:18 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:47.978 [2024-11-27 14:08:18.418718] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:08:47.978 [2024-11-27 14:08:18.418874] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58208 ] 00:08:48.237 [2024-11-27 14:08:18.591568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.237 [2024-11-27 14:08:18.720053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.629 test_start 00:08:49.629 oneshot 00:08:49.629 tick 100 00:08:49.629 tick 100 00:08:49.629 tick 250 00:08:49.629 tick 100 00:08:49.629 tick 100 00:08:49.629 tick 250 00:08:49.629 tick 500 00:08:49.629 tick 100 00:08:49.629 tick 100 00:08:49.629 tick 100 00:08:49.629 tick 250 00:08:49.629 tick 100 00:08:49.629 tick 100 00:08:49.629 test_end 00:08:49.629 00:08:49.629 real 0m1.571s 00:08:49.629 user 0m1.360s 00:08:49.629 sys 0m0.102s 00:08:49.629 14:08:19 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.629 ************************************ 00:08:49.629 END TEST event_reactor 00:08:49.629 ************************************ 00:08:49.629 14:08:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 14:08:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:49.629 14:08:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:49.629 14:08:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.629 14:08:19 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 ************************************ 00:08:49.629 START TEST event_reactor_perf 00:08:49.629 ************************************ 00:08:49.629 14:08:20 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:49.629 [2024-11-27 
14:08:20.044691] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:49.629 [2024-11-27 14:08:20.044889] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58240 ] 00:08:49.888 [2024-11-27 14:08:20.213356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.888 [2024-11-27 14:08:20.346733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.263 test_start 00:08:51.263 test_end 00:08:51.263 Performance: 280419 events per second 00:08:51.263 00:08:51.263 real 0m1.592s 00:08:51.264 user 0m1.390s 00:08:51.264 sys 0m0.093s 00:08:51.264 14:08:21 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.264 14:08:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:51.264 ************************************ 00:08:51.264 END TEST event_reactor_perf 00:08:51.264 ************************************ 00:08:51.264 14:08:21 event -- event/event.sh@49 -- # uname -s 00:08:51.264 14:08:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:51.264 14:08:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:51.264 14:08:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.264 14:08:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.264 14:08:21 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.264 ************************************ 00:08:51.264 START TEST event_scheduler 00:08:51.264 ************************************ 00:08:51.264 14:08:21 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:51.264 * Looking for test storage... 
00:08:51.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:51.264 14:08:21 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:51.264 14:08:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:51.264 14:08:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:51.522 14:08:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.522 14:08:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:51.522 14:08:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.522 14:08:21 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:51.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.522 --rc genhtml_branch_coverage=1 00:08:51.522 --rc genhtml_function_coverage=1 00:08:51.523 --rc genhtml_legend=1 00:08:51.523 --rc geninfo_all_blocks=1 00:08:51.523 --rc geninfo_unexecuted_blocks=1 00:08:51.523 00:08:51.523 ' 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:51.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.523 --rc genhtml_branch_coverage=1 00:08:51.523 --rc genhtml_function_coverage=1 00:08:51.523 --rc 
genhtml_legend=1 00:08:51.523 --rc geninfo_all_blocks=1 00:08:51.523 --rc geninfo_unexecuted_blocks=1 00:08:51.523 00:08:51.523 ' 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:51.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.523 --rc genhtml_branch_coverage=1 00:08:51.523 --rc genhtml_function_coverage=1 00:08:51.523 --rc genhtml_legend=1 00:08:51.523 --rc geninfo_all_blocks=1 00:08:51.523 --rc geninfo_unexecuted_blocks=1 00:08:51.523 00:08:51.523 ' 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:51.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.523 --rc genhtml_branch_coverage=1 00:08:51.523 --rc genhtml_function_coverage=1 00:08:51.523 --rc genhtml_legend=1 00:08:51.523 --rc geninfo_all_blocks=1 00:08:51.523 --rc geninfo_unexecuted_blocks=1 00:08:51.523 00:08:51.523 ' 00:08:51.523 14:08:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:51.523 14:08:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58316 00:08:51.523 14:08:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.523 14:08:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:51.523 14:08:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58316 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58316 ']' 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:51.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.523 14:08:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:51.523 [2024-11-27 14:08:21.946914] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:08:51.523 [2024-11-27 14:08:21.947131] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:08:51.781 [2024-11-27 14:08:22.208665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.040 [2024-11-27 14:08:22.359371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.040 [2024-11-27 14:08:22.359477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.040 [2024-11-27 14:08:22.360211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.040 [2024-11-27 14:08:22.360244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:52.607 14:08:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:52.607 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:52.607 POWER: Cannot set governor of lcore 0 to userspace 00:08:52.607 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:52.607 POWER: Cannot set governor of lcore 0 to performance 00:08:52.607 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:52.607 POWER: Cannot set governor of lcore 0 to userspace 00:08:52.607 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:52.607 POWER: Cannot set governor of lcore 0 to userspace 00:08:52.607 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:52.607 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:52.607 POWER: Unable to set Power Management Environment for lcore 0 00:08:52.607 [2024-11-27 14:08:22.970147] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:52.607 [2024-11-27 14:08:22.970176] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:52.607 [2024-11-27 14:08:22.970191] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:52.607 [2024-11-27 14:08:22.970242] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:52.607 [2024-11-27 14:08:22.970260] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:52.607 [2024-11-27 14:08:22.970275] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.607 14:08:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.607 14:08:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 [2024-11-27 14:08:23.308006] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
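The `POWER: Cannot set governor of lcore 0` errors above come from DPDK attempting to write per-core cpufreq governors through the standard Linux sysfs path; on this VM the node is absent, so the dynamic scheduler falls back without the dpdk governor. A hedged sketch of that check-before-write pattern (the helper name and the temp-file demo are illustrative, not SPDK code):

```shell
# Try to set a cpufreq governor via a sysfs-style node; fail gracefully
# when the node is missing or unwritable, as on the VM in the log above.
set_governor() {
    local gov_file=$1 governor=$2
    if [ -w "$gov_file" ]; then
        echo "$governor" > "$gov_file"
    else
        echo "cannot set governor via $gov_file" >&2
        return 1
    fi
}

# Demo against a temp file standing in for
# /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor:
tmp=$(mktemp)
set_governor "$tmp" userspace
cat "$tmp"
set_governor /nonexistent/scaling_governor performance || echo "fell back"
rm -f "$tmp"
```

The fallback branch corresponds to the `Failed to initialize on core0` / `Unable to initialize dpdk governor` notices in the trace, after which the scheduler still starts with its load/core/busy limits.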
00:08:52.866 14:08:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:52.866 14:08:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.866 14:08:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 ************************************ 00:08:52.866 START TEST scheduler_create_thread 00:08:52.866 ************************************ 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 2 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 3 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 4 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 5 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.866 6 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.866 7 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.866 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.125 8 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.125 9 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.125 10 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.125 14:08:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:54.501 14:08:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.501 14:08:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:54.501 14:08:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:54.501 14:08:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.501 14:08:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.436 14:08:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.436 00:08:55.436 real 0m2.617s 00:08:55.436 user 0m0.024s 00:08:55.436 sys 0m0.003s 00:08:55.436 ************************************ 00:08:55.436 END TEST scheduler_create_thread 00:08:55.436 ************************************ 00:08:55.436 14:08:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.436 14:08:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.694 14:08:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:55.694 14:08:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58316 00:08:55.694 14:08:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58316 ']' 00:08:55.694 14:08:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58316 00:08:55.694 14:08:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:55.694 14:08:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.694 14:08:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58316 00:08:55.694 14:08:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:55.694 killing process with pid 58316 00:08:55.694 14:08:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:55.694 14:08:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58316' 00:08:55.694 14:08:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58316 00:08:55.694 14:08:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58316 00:08:55.952 [2024-11-27 14:08:26.418539] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:57.327 00:08:57.327 real 0m5.876s 00:08:57.327 user 0m10.190s 00:08:57.327 sys 0m0.515s 00:08:57.327 ************************************ 00:08:57.327 END TEST event_scheduler 00:08:57.327 ************************************ 00:08:57.327 14:08:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.327 14:08:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:57.327 14:08:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:57.327 14:08:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:57.327 14:08:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.327 14:08:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.327 14:08:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.327 ************************************ 00:08:57.327 START TEST app_repeat 00:08:57.327 ************************************ 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58428 00:08:57.327 Process app_repeat pid: 58428 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:57.327 
14:08:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58428' 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:57.327 spdk_app_start Round 0 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:57.327 14:08:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58428 ']' 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:57.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.327 14:08:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:57.327 [2024-11-27 14:08:27.638701] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:08:57.327 [2024-11-27 14:08:27.638868] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58428 ] 00:08:57.327 [2024-11-27 14:08:27.813467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.585 [2024-11-27 14:08:27.953337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.585 [2024-11-27 14:08:27.953348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.207 14:08:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.207 14:08:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:58.207 14:08:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:58.775 Malloc0 00:08:58.775 14:08:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:59.035 Malloc1 00:08:59.035 14:08:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:59.035 14:08:29 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:59.035 14:08:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:59.293 /dev/nbd0 00:08:59.293 14:08:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:59.293 14:08:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:59.293 14:08:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:59.294 1+0 records in 00:08:59.294 1+0 
records out 00:08:59.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245966 s, 16.7 MB/s 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:59.294 14:08:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:59.294 14:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.294 14:08:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:59.294 14:08:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:59.551 /dev/nbd1 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:59.551 1+0 records in 00:08:59.551 1+0 records out 00:08:59.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295017 s, 13.9 MB/s 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:59.551 14:08:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.551 14:08:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.809 14:08:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:59.809 { 00:08:59.809 "nbd_device": "/dev/nbd0", 00:08:59.809 "bdev_name": "Malloc0" 00:08:59.809 }, 00:08:59.809 { 00:08:59.809 "nbd_device": "/dev/nbd1", 00:08:59.809 "bdev_name": "Malloc1" 00:08:59.809 } 00:08:59.809 ]' 00:08:59.809 14:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:59.809 { 00:08:59.809 "nbd_device": "/dev/nbd0", 00:08:59.809 "bdev_name": "Malloc0" 00:08:59.809 }, 00:08:59.809 { 00:08:59.809 "nbd_device": "/dev/nbd1", 00:08:59.809 "bdev_name": "Malloc1" 00:08:59.809 } 00:08:59.809 ]' 00:08:59.809 14:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:00.067 /dev/nbd1' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:00.067 /dev/nbd1' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:00.067 256+0 records in 00:09:00.067 256+0 records out 00:09:00.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639516 s, 164 MB/s 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:00.067 256+0 records in 00:09:00.067 256+0 records out 00:09:00.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289625 s, 36.2 MB/s 00:09:00.067 14:08:30 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:00.067 256+0 records in 00:09:00.067 256+0 records out 00:09:00.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342258 s, 30.6 MB/s 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.067 14:08:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:00.326 14:08:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.326 14:08:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.326 14:08:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.326 14:08:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.326 14:08:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.327 14:08:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.327 14:08:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:00.327 14:08:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.327 14:08:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.327 14:08:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.585 14:08:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.586 14:08:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.151 14:08:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:01.151 14:08:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:01.151 14:08:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.151 14:08:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:01.151 14:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:01.151 14:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.152 14:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:01.152 14:08:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:01.152 14:08:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:01.152 14:08:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:01.152 14:08:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:01.152 14:08:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:01.152 14:08:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:01.409 14:08:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:02.804 [2024-11-27 14:08:33.131587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.804 [2024-11-27 14:08:33.262011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.804 [2024-11-27 14:08:33.262021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.062 
[2024-11-27 14:08:33.459350] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:03.062 [2024-11-27 14:08:33.459422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:04.439 spdk_app_start Round 1 00:09:04.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:04.439 14:08:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:04.439 14:08:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:04.439 14:08:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:09:04.439 14:08:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58428 ']' 00:09:04.439 14:08:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:04.439 14:08:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.439 14:08:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:04.439 14:08:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.439 14:08:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.005 14:08:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.005 14:08:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:05.005 14:08:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.264 Malloc0 00:09:05.264 14:08:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.522 Malloc1 00:09:05.522 14:08:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.522 14:08:35 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.522 14:08:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:05.780 /dev/nbd0 00:09:05.780 14:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:05.780 14:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.780 1+0 records in 00:09:05.780 1+0 records out 00:09:05.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302111 s, 13.6 MB/s 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.780 
14:08:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.780 14:08:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:05.780 14:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.780 14:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.780 14:08:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:06.038 /dev/nbd1 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:06.295 1+0 records in 00:09:06.295 1+0 records out 00:09:06.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346648 s, 11.8 MB/s 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:06.295 14:08:36 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:06.295 14:08:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.295 14:08:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.553 14:08:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:06.553 { 00:09:06.553 "nbd_device": "/dev/nbd0", 00:09:06.553 "bdev_name": "Malloc0" 00:09:06.553 }, 00:09:06.553 { 00:09:06.553 "nbd_device": "/dev/nbd1", 00:09:06.553 "bdev_name": "Malloc1" 00:09:06.553 } 00:09:06.553 ]' 00:09:06.553 14:08:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:06.553 { 00:09:06.553 "nbd_device": "/dev/nbd0", 00:09:06.553 "bdev_name": "Malloc0" 00:09:06.553 }, 00:09:06.553 { 00:09:06.553 "nbd_device": "/dev/nbd1", 00:09:06.553 "bdev_name": "Malloc1" 00:09:06.553 } 00:09:06.553 ]' 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:06.554 /dev/nbd1' 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:06.554 /dev/nbd1' 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:06.554 
14:08:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:06.554 256+0 records in 00:09:06.554 256+0 records out 00:09:06.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105951 s, 99.0 MB/s 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:06.554 256+0 records in 00:09:06.554 256+0 records out 00:09:06.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266137 s, 39.4 MB/s 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.554 14:08:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.554 256+0 records in 00:09:06.554 256+0 records out 00:09:06.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305432 s, 34.3 MB/s 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.554 14:08:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.812 14:08:37 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.812 14:08:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.378 14:08:37 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.378 14:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:07.636 14:08:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:07.636 14:08:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:08.201 14:08:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:09.135 [2024-11-27 14:08:39.505739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.135 [2024-11-27 14:08:39.638514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.135 [2024-11-27 14:08:39.638517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.392 [2024-11-27 14:08:39.831427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:09.392 [2024-11-27 14:08:39.831557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:11.291 spdk_app_start Round 2 00:09:11.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:11.291 14:08:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:11.291 14:08:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:11.291 14:08:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58428 ']' 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.291 14:08:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:11.291 14:08:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.549 Malloc0 00:09:11.549 14:08:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:12.115 Malloc1 00:09:12.115 14:08:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.115 14:08:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:12.373 /dev/nbd0 00:09:12.373 14:08:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:12.373 14:08:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.373 1+0 records in 00:09:12.373 1+0 records out 00:09:12.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657058 s, 6.2 MB/s 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.373 14:08:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:12.373 14:08:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.373 14:08:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.373 14:08:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:12.631 /dev/nbd1 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:12.631 14:08:42 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.631 1+0 records in 00:09:12.631 1+0 records out 00:09:12.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432557 s, 9.5 MB/s 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.631 14:08:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.631 14:08:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.889 { 00:09:12.889 "nbd_device": "/dev/nbd0", 00:09:12.889 "bdev_name": "Malloc0" 00:09:12.889 }, 00:09:12.889 { 00:09:12.889 "nbd_device": "/dev/nbd1", 00:09:12.889 "bdev_name": "Malloc1" 00:09:12.889 } 00:09:12.889 ]' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.889 { 
00:09:12.889 "nbd_device": "/dev/nbd0", 00:09:12.889 "bdev_name": "Malloc0" 00:09:12.889 }, 00:09:12.889 { 00:09:12.889 "nbd_device": "/dev/nbd1", 00:09:12.889 "bdev_name": "Malloc1" 00:09:12.889 } 00:09:12.889 ]' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:12.889 /dev/nbd1' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:12.889 /dev/nbd1' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:12.889 256+0 records in 00:09:12.889 256+0 records out 00:09:12.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755962 s, 139 MB/s 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.889 14:08:43 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.889 256+0 records in 00:09:12.889 256+0 records out 00:09:12.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254755 s, 41.2 MB/s 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.889 14:08:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:13.155 256+0 records in 00:09:13.155 256+0 records out 00:09:13.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0570392 s, 18.4 MB/s 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.155 14:08:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.415 14:08:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:13.673 14:08:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:13.673 14:08:44 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.673 14:08:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:13.931 14:08:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:13.931 14:08:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:14.497 14:08:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:15.455 
[2024-11-27 14:08:45.936435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.713 [2024-11-27 14:08:46.062260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.713 [2024-11-27 14:08:46.062274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.970 [2024-11-27 14:08:46.251314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:15.971 [2024-11-27 14:08:46.251411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:17.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:17.868 14:08:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:09:17.868 14:08:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58428 ']' 00:09:17.868 14:08:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:17.868 14:08:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.869 14:08:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:17.869 14:08:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.869 14:08:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:17.869 14:08:48 event.app_repeat -- event/event.sh@39 -- # killprocess 58428 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58428 ']' 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58428 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58428 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.869 killing process with pid 58428 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58428' 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58428 00:09:17.869 14:08:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58428 00:09:18.806 spdk_app_start is called in Round 0. 00:09:18.806 Shutdown signal received, stop current app iteration 00:09:18.806 Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 reinitialization... 00:09:18.806 spdk_app_start is called in Round 1. 00:09:18.806 Shutdown signal received, stop current app iteration 00:09:18.806 Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 reinitialization... 00:09:18.806 spdk_app_start is called in Round 2. 
00:09:18.806 Shutdown signal received, stop current app iteration 00:09:18.806 Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 reinitialization... 00:09:18.806 spdk_app_start is called in Round 3. 00:09:18.806 Shutdown signal received, stop current app iteration 00:09:18.806 14:08:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:18.806 14:08:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:18.806 00:09:18.806 real 0m21.617s 00:09:18.806 user 0m47.774s 00:09:18.806 sys 0m3.042s 00:09:18.806 14:08:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.806 ************************************ 00:09:18.806 END TEST app_repeat 00:09:18.806 ************************************ 00:09:18.806 14:08:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:18.806 14:08:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:18.806 14:08:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:18.806 14:08:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.806 14:08:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.806 14:08:49 event -- common/autotest_common.sh@10 -- # set +x 00:09:18.806 ************************************ 00:09:18.806 START TEST cpu_locks 00:09:18.806 ************************************ 00:09:18.806 14:08:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:19.065 * Looking for test storage... 
00:09:19.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:19.065 14:08:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.065 14:08:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.065 14:08:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.065 14:08:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.065 14:08:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:19.065 14:08:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.066 --rc genhtml_branch_coverage=1 00:09:19.066 --rc genhtml_function_coverage=1 00:09:19.066 --rc genhtml_legend=1 00:09:19.066 --rc geninfo_all_blocks=1 00:09:19.066 --rc geninfo_unexecuted_blocks=1 00:09:19.066 00:09:19.066 ' 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.066 --rc genhtml_branch_coverage=1 00:09:19.066 --rc genhtml_function_coverage=1 00:09:19.066 --rc genhtml_legend=1 00:09:19.066 --rc geninfo_all_blocks=1 00:09:19.066 --rc geninfo_unexecuted_blocks=1 
00:09:19.066 00:09:19.066 ' 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.066 --rc genhtml_branch_coverage=1 00:09:19.066 --rc genhtml_function_coverage=1 00:09:19.066 --rc genhtml_legend=1 00:09:19.066 --rc geninfo_all_blocks=1 00:09:19.066 --rc geninfo_unexecuted_blocks=1 00:09:19.066 00:09:19.066 ' 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.066 --rc genhtml_branch_coverage=1 00:09:19.066 --rc genhtml_function_coverage=1 00:09:19.066 --rc genhtml_legend=1 00:09:19.066 --rc geninfo_all_blocks=1 00:09:19.066 --rc geninfo_unexecuted_blocks=1 00:09:19.066 00:09:19.066 ' 00:09:19.066 14:08:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:19.066 14:08:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:19.066 14:08:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:19.066 14:08:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.066 14:08:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.066 ************************************ 00:09:19.066 START TEST default_locks 00:09:19.066 ************************************ 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58899 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58899 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58899 ']' 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.066 14:08:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.066 [2024-11-27 14:08:49.559503] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:09:19.066 [2024-11-27 14:08:49.560368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58899 ] 00:09:19.323 [2024-11-27 14:08:49.736499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.581 [2024-11-27 14:08:49.869333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.514 14:08:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.514 14:08:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:20.514 14:08:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58899 00:09:20.514 14:08:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58899 00:09:20.514 14:08:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58899 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58899 ']' 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58899 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58899 00:09:20.772 killing process with pid 58899 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58899' 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58899 00:09:20.772 14:08:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58899 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58899 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58899 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:23.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.300 ERROR: process (pid: 58899) is no longer running 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58899 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58899 ']' 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58899) - No such process 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:23.300 00:09:23.300 real 0m3.896s 00:09:23.300 user 0m3.949s 00:09:23.300 sys 0m0.702s 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.300 14:08:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.300 ************************************ 00:09:23.300 END TEST default_locks 00:09:23.300 ************************************ 00:09:23.300 14:08:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:23.300 14:08:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.300 14:08:53 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.300 14:08:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.300 ************************************ 00:09:23.300 START TEST default_locks_via_rpc 00:09:23.300 ************************************ 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58974 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58974 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.300 14:08:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.300 [2024-11-27 14:08:53.523315] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:09:23.300 [2024-11-27 14:08:53.523503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:09:23.301 [2024-11-27 14:08:53.700301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.559 [2024-11-27 14:08:53.830495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.494 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.494 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.494 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:24.494 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.495 14:08:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58974 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.495 14:08:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58974 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58974 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58974 ']' 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58974 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58974 00:09:24.765 killing process with pid 58974 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58974' 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58974 00:09:24.765 14:08:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58974 00:09:27.293 00:09:27.293 real 0m3.871s 00:09:27.293 user 0m3.925s 00:09:27.293 sys 0m0.676s 00:09:27.293 ************************************ 00:09:27.293 END TEST default_locks_via_rpc 00:09:27.293 ************************************ 00:09:27.293 
14:08:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.293 14:08:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.293 14:08:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:27.293 14:08:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.293 14:08:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.293 14:08:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.293 ************************************ 00:09:27.293 START TEST non_locking_app_on_locked_coremask 00:09:27.293 ************************************ 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:27.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59050 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59050 /var/tmp/spdk.sock 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59050 ']' 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.293 14:08:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.293 [2024-11-27 14:08:57.448753] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:09:27.293 [2024-11-27 14:08:57.448947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59050 ] 00:09:27.293 [2024-11-27 14:08:57.632886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.293 [2024-11-27 14:08:57.785813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59066 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59066 /var/tmp/spdk2.sock 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59066 ']' 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.227 14:08:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.485 [2024-11-27 14:08:58.750699] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:09:28.485 [2024-11-27 14:08:58.751147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:09:28.485 [2024-11-27 14:08:58.946501] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:28.485 [2024-11-27 14:08:58.946568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.743 [2024-11-27 14:08:59.216045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.271 14:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.271 14:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:31.271 14:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59050 00:09:31.271 14:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59050 00:09:31.271 14:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59050 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59050 ']' 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59050 00:09:31.839 14:09:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59050 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.839 killing process with pid 59050 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59050' 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59050 00:09:31.839 14:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59050 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59066 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59066 ']' 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59066 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59066 00:09:37.133 killing process with pid 59066 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59066' 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59066 00:09:37.133 14:09:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59066 00:09:38.507 ************************************ 00:09:38.507 END TEST non_locking_app_on_locked_coremask 00:09:38.508 ************************************ 00:09:38.508 00:09:38.508 real 0m11.687s 00:09:38.508 user 0m12.222s 00:09:38.508 sys 0m1.406s 00:09:38.508 14:09:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.508 14:09:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.765 14:09:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:38.765 14:09:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.765 14:09:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.765 14:09:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:38.765 ************************************ 00:09:38.765 START TEST locking_app_on_unlocked_coremask 00:09:38.765 ************************************ 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:38.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59215 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59215 /var/tmp/spdk.sock 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59215 ']' 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.765 14:09:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.765 [2024-11-27 14:09:09.218424] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:09:38.765 [2024-11-27 14:09:09.218802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ] 00:09:39.022 [2024-11-27 14:09:09.396506] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:39.022 [2024-11-27 14:09:09.396799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.022 [2024-11-27 14:09:09.527996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59237 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59237 /var/tmp/spdk2.sock 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59237 ']' 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:39.956 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:39.957 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:39.957 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.957 14:09:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.214 [2024-11-27 14:09:10.536347] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:09:40.214 [2024-11-27 14:09:10.536774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:09:40.472 [2024-11-27 14:09:10.744208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.730 [2024-11-27 14:09:11.008245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.261 14:09:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.261 14:09:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:43.262 14:09:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59237 00:09:43.262 14:09:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59237 00:09:43.262 14:09:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59215 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59215 ']' 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59215 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59215 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.826 killing process with pid 59215 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59215' 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59215 00:09:43.826 14:09:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59215 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59237 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59237 ']' 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59237 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59237 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.097 killing process with pid 59237 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59237' 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59237 00:09:49.097 14:09:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59237 00:09:50.527 00:09:50.527 real 0m11.895s 00:09:50.527 user 0m12.569s 00:09:50.527 sys 0m1.542s 00:09:50.527 14:09:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.527 ************************************ 00:09:50.527 END TEST locking_app_on_unlocked_coremask 00:09:50.527 ************************************ 00:09:50.527 14:09:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:50.527 14:09:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:50.527 14:09:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.527 14:09:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.527 14:09:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.527 ************************************ 00:09:50.527 START TEST locking_app_on_locked_coremask 00:09:50.527 ************************************ 00:09:50.527 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:50.527 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59385 00:09:50.527 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:50.527 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59385 /var/tmp/spdk.sock 00:09:50.528 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59385 ']' 00:09:50.528 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.528 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.528 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.528 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.528 14:09:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:50.787 [2024-11-27 14:09:21.138062] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:09:50.787 [2024-11-27 14:09:21.138280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:09:51.045 [2024-11-27 14:09:21.312177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.045 [2024-11-27 14:09:21.445921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.979 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59406 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59406 /var/tmp/spdk2.sock 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59406 /var/tmp/spdk2.sock 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59406 /var/tmp/spdk2.sock 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59406 ']' 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.980 14:09:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.980 [2024-11-27 14:09:22.456441] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:09:51.980 [2024-11-27 14:09:22.456618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59406 ] 00:09:52.237 [2024-11-27 14:09:22.659322] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59385 has claimed it. 00:09:52.237 [2024-11-27 14:09:22.659409] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:52.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59406) - No such process 00:09:52.806 ERROR: process (pid: 59406) is no longer running 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59385 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59385 00:09:52.806 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:53.065 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59385 00:09:53.065 14:09:23 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59385 ']' 00:09:53.065 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59385 00:09:53.065 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:53.065 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.065 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59385 00:09:53.323 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.323 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.323 killing process with pid 59385 00:09:53.323 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59385' 00:09:53.323 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59385 00:09:53.323 14:09:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59385 00:09:55.850 00:09:55.850 real 0m4.834s 00:09:55.850 user 0m5.161s 00:09:55.850 sys 0m0.885s 00:09:55.850 14:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.850 14:09:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.850 ************************************ 00:09:55.850 END TEST locking_app_on_locked_coremask 00:09:55.850 ************************************ 00:09:55.850 14:09:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:55.850 14:09:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:09:55.850 14:09:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.850 14:09:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:55.850 ************************************ 00:09:55.850 START TEST locking_overlapped_coremask 00:09:55.850 ************************************ 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59477 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59477 /var/tmp/spdk.sock 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59477 ']' 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.850 14:09:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.850 [2024-11-27 14:09:26.055036] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:09:55.850 [2024-11-27 14:09:26.055219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59477 ] 00:09:55.850 [2024-11-27 14:09:26.238197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.109 [2024-11-27 14:09:26.373295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.109 [2024-11-27 14:09:26.373376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.109 [2024-11-27 14:09:26.373391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59495 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59495 /var/tmp/spdk2.sock 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59495 /var/tmp/spdk2.sock 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59495 /var/tmp/spdk2.sock 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59495 ']' 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.045 14:09:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:57.045 [2024-11-27 14:09:27.379433] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:09:57.045 [2024-11-27 14:09:27.379666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59495 ] 00:09:57.303 [2024-11-27 14:09:27.588934] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59477 has claimed it. 00:09:57.303 [2024-11-27 14:09:27.589004] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
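The two ERROR lines above are this test's expected outcome: SPDK pins each claimed core with an advisory lock file (`/var/tmp/spdk_cpu_lock_NNN`, the same files `check_remaining_locks` globs over later in the log), so the second target (mask 0x1c) exits because the first (mask 0x7) already holds core 2. The idea can be sketched with `flock(1)` — the temp directory, helper name, and messages below are illustrative, not SPDK's actual implementation:

```shell
# Per-core advisory locking, sketched with flock(1) in a temp dir.
# claim_core and the lock-file layout are illustrative, not SPDK code.
lockdir=$(mktemp -d)

claim_core() {
    local core=$1 fd
    # Open the per-core lock file on a fresh fd; the fd stays open after
    # the function returns, so the flock remains held by this process.
    exec {fd}>"$lockdir/spdk_cpu_lock_$(printf '%03d' "$core")"
    if flock -n "$fd"; then
        echo "claimed core $core"
    else
        echo "Cannot create lock on core $core" >&2
        return 1
    fi
}

claim_core 2                                   # first claimant wins
claim_core 2 || echo "second claim on core 2 rejected"
```

Because `flock` locks belong to the open file description, a second open of the same lock file is denied even within one process, which is what makes the non-blocking second claim fail here just as the second `spdk_tgt` does above.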
00:09:57.561 ERROR: process (pid: 59495) is no longer running 00:09:57.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59495) - No such process 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59477 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59477 ']' 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59477 00:09:57.561 14:09:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.561 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59477 00:09:57.820 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.820 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.820 killing process with pid 59477 00:09:57.820 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59477' 00:09:57.820 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59477 00:09:57.820 14:09:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59477 00:10:00.348 00:10:00.348 real 0m4.402s 00:10:00.348 user 0m11.956s 00:10:00.348 sys 0m0.710s 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 ************************************ 00:10:00.348 END TEST locking_overlapped_coremask 00:10:00.348 ************************************ 00:10:00.348 14:09:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:00.348 14:09:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.348 14:09:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.348 14:09:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 ************************************ 00:10:00.348 START TEST 
locking_overlapped_coremask_via_rpc 00:10:00.348 ************************************ 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59559 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59559 /var/tmp/spdk.sock 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59559 ']' 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.348 14:09:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 [2024-11-27 14:09:30.464287] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:10:00.348 [2024-11-27 14:09:30.464485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59559 ] 00:10:00.348 [2024-11-27 14:09:30.634962] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:00.348 [2024-11-27 14:09:30.635029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.348 [2024-11-27 14:09:30.769236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.348 [2024-11-27 14:09:30.769366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.348 [2024-11-27 14:09:30.769374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59588 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59588 /var/tmp/spdk2.sock 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59588 ']' 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.284 14:09:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:01.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.284 14:09:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.543 [2024-11-27 14:09:31.796321] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:01.543 [2024-11-27 14:09:31.796480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:10:01.543 [2024-11-27 14:09:31.992788] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
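With `--disable-cpumask-locks` both targets come up despite sharing core 2; the locks are only taken later, when `framework_enable_cpumask_locks` is issued over the RPC socket. The `waitforlisten` helper traced above polls until the target is alive and its Unix socket exists. A simplified sketch of that polling loop — the helper name, retry count, and sleep interval are assumptions, not the exact `autotest_common.sh` logic:

```shell
waitforlisten_sketch() {
    # Poll until process $1 is alive and its RPC socket $2 exists, or
    # give up after max_retries rounds. Simplified and illustrative only.
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died
        [[ -S $rpc_addr ]] && return 0           # socket is listening
        sleep 0.1
    done
    return 1
}
```

The real helper additionally issues an RPC to confirm the target responds; this sketch only checks liveness and socket presence.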
00:10:01.543 [2024-11-27 14:09:31.992863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:01.802 [2024-11-27 14:09:32.281101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.802 [2024-11-27 14:09:32.284914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.802 [2024-11-27 14:09:32.284916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.332 14:09:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.332 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.332 [2024-11-27 14:09:34.632037] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59559 has claimed it. 00:10:04.332 request: 00:10:04.332 { 00:10:04.332 "method": "framework_enable_cpumask_locks", 00:10:04.332 "req_id": 1 00:10:04.332 } 00:10:04.332 Got JSON-RPC error response 00:10:04.332 response: 00:10:04.333 { 00:10:04.333 "code": -32603, 00:10:04.333 "message": "Failed to claim CPU core: 2" 00:10:04.333 } 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59559 /var/tmp/spdk.sock 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59559 ']' 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.333 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59588 /var/tmp/spdk2.sock 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59588 ']' 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:04.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
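The `check_remaining_locks` step that follows compares the lock files actually on disk against the set expected for mask 0x7 (cores 0-2), using a glob on one side and a brace expansion on the other. The same comparison can be sketched in a temp directory — paths here stand in for `/var/tmp`, and the file names mirror the log output rather than SPDK's source:

```shell
# Sketch of check_remaining_locks: after the run, exactly the lock files
# for cores 0-2 (mask 0x7) should remain. Temp dir stands in for /var/tmp.
lockdir=$(mktemp -d)
touch "$lockdir"/spdk_cpu_lock_{000..002}    # simulate cores 0-2 claimed

locks=("$lockdir"/spdk_cpu_lock_*)           # what is actually on disk
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "expected locks present"
```

This works because pathname expansion returns names in sorted order, which matches the ascending order `{000..002}` produces, so a stray or missing lock file makes the joined strings differ.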
00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.591 14:09:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:04.849 00:10:04.849 real 0m4.903s 00:10:04.849 user 0m1.802s 00:10:04.849 sys 0m0.271s 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.849 14:09:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.849 ************************************ 00:10:04.849 END TEST locking_overlapped_coremask_via_rpc 00:10:04.849 ************************************ 00:10:04.849 14:09:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:04.849 14:09:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59559 ]] 00:10:04.849 14:09:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59559 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59559 ']' 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59559 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59559 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.849 killing process with pid 59559 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59559' 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59559 00:10:04.849 14:09:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59559 00:10:07.382 14:09:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59588 ]] 00:10:07.382 14:09:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59588 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59588 ']' 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59588 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59588 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:07.382 killing process with pid 59588 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59588' 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59588 00:10:07.382 14:09:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59588 00:10:09.914 14:09:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:09.914 14:09:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:09.914 14:09:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59559 ]] 00:10:09.914 14:09:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59559 00:10:09.914 14:09:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59559 ']' 00:10:09.914 14:09:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59559 00:10:09.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59559) - No such process 00:10:09.914 Process with pid 59559 is not found 00:10:09.914 14:09:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59559 is not found' 00:10:09.914 14:09:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59588 ]] 00:10:09.914 14:09:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59588 00:10:09.914 14:09:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59588 ']' 00:10:09.914 14:09:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59588 00:10:09.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59588) - No such process 00:10:09.915 Process with pid 59588 is not found 00:10:09.915 14:09:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59588 is not found' 00:10:09.915 14:09:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:09.915 00:10:09.915 real 0m50.699s 00:10:09.915 user 1m28.407s 00:10:09.915 sys 0m7.459s 00:10:09.915 14:09:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.915 14:09:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:09.915 
************************************ 00:10:09.915 END TEST cpu_locks 00:10:09.915 ************************************ 00:10:09.915 00:10:09.915 real 1m23.549s 00:10:09.915 user 2m33.749s 00:10:09.915 sys 0m11.622s 00:10:09.915 14:09:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.915 14:09:40 event -- common/autotest_common.sh@10 -- # set +x 00:10:09.915 ************************************ 00:10:09.915 END TEST event 00:10:09.915 ************************************ 00:10:09.915 14:09:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:09.915 14:09:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.915 14:09:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.915 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:10:09.915 ************************************ 00:10:09.915 START TEST thread 00:10:09.915 ************************************ 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:09.915 * Looking for test storage... 
00:10:09.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.915 14:09:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.915 14:09:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.915 14:09:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.915 14:09:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.915 14:09:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.915 14:09:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.915 14:09:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.915 14:09:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.915 14:09:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.915 14:09:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.915 14:09:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.915 14:09:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:09.915 14:09:40 thread -- scripts/common.sh@345 -- # : 1 00:10:09.915 14:09:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.915 14:09:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.915 14:09:40 thread -- scripts/common.sh@365 -- # decimal 1 00:10:09.915 14:09:40 thread -- scripts/common.sh@353 -- # local d=1 00:10:09.915 14:09:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.915 14:09:40 thread -- scripts/common.sh@355 -- # echo 1 00:10:09.915 14:09:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.915 14:09:40 thread -- scripts/common.sh@366 -- # decimal 2 00:10:09.915 14:09:40 thread -- scripts/common.sh@353 -- # local d=2 00:10:09.915 14:09:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.915 14:09:40 thread -- scripts/common.sh@355 -- # echo 2 00:10:09.915 14:09:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.915 14:09:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.915 14:09:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.915 14:09:40 thread -- scripts/common.sh@368 -- # return 0 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.915 --rc genhtml_branch_coverage=1 00:10:09.915 --rc genhtml_function_coverage=1 00:10:09.915 --rc genhtml_legend=1 00:10:09.915 --rc geninfo_all_blocks=1 00:10:09.915 --rc geninfo_unexecuted_blocks=1 00:10:09.915 00:10:09.915 ' 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.915 --rc genhtml_branch_coverage=1 00:10:09.915 --rc genhtml_function_coverage=1 00:10:09.915 --rc genhtml_legend=1 00:10:09.915 --rc geninfo_all_blocks=1 00:10:09.915 --rc geninfo_unexecuted_blocks=1 00:10:09.915 00:10:09.915 ' 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.915 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.915 --rc genhtml_branch_coverage=1 00:10:09.915 --rc genhtml_function_coverage=1 00:10:09.915 --rc genhtml_legend=1 00:10:09.915 --rc geninfo_all_blocks=1 00:10:09.915 --rc geninfo_unexecuted_blocks=1 00:10:09.915 00:10:09.915 ' 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.915 --rc genhtml_branch_coverage=1 00:10:09.915 --rc genhtml_function_coverage=1 00:10:09.915 --rc genhtml_legend=1 00:10:09.915 --rc geninfo_all_blocks=1 00:10:09.915 --rc geninfo_unexecuted_blocks=1 00:10:09.915 00:10:09.915 ' 00:10:09.915 14:09:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.915 14:09:40 thread -- common/autotest_common.sh@10 -- # set +x 00:10:09.915 ************************************ 00:10:09.915 START TEST thread_poller_perf 00:10:09.915 ************************************ 00:10:09.915 14:09:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:09.915 [2024-11-27 14:09:40.277797] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:10:09.915 [2024-11-27 14:09:40.277996] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59783 ] 00:10:10.173 [2024-11-27 14:09:40.531047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.173 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:10.173 [2024-11-27 14:09:40.671979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.548 [2024-11-27T14:09:42.061Z] ====================================== 00:10:11.548 [2024-11-27T14:09:42.061Z] busy:2211793049 (cyc) 00:10:11.548 [2024-11-27T14:09:42.061Z] total_run_count: 262000 00:10:11.548 [2024-11-27T14:09:42.061Z] tsc_hz: 2200000000 (cyc) 00:10:11.548 [2024-11-27T14:09:42.061Z] ====================================== 00:10:11.548 [2024-11-27T14:09:42.061Z] poller_cost: 8441 (cyc), 3836 (nsec) 00:10:11.548 00:10:11.548 real 0m1.678s 00:10:11.548 user 0m1.478s 00:10:11.548 sys 0m0.091s 00:10:11.548 14:09:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.548 ************************************ 00:10:11.548 END TEST thread_poller_perf 00:10:11.548 14:09:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:11.548 ************************************ 00:10:11.548 14:09:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:11.548 14:09:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:11.548 14:09:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.548 14:09:41 thread -- common/autotest_common.sh@10 -- # set +x 00:10:11.548 ************************************ 00:10:11.548 START TEST thread_poller_perf 00:10:11.548 
************************************ 00:10:11.548 14:09:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:11.548 [2024-11-27 14:09:42.009875] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:11.548 [2024-11-27 14:09:42.010053] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:10:11.807 [2024-11-27 14:09:42.199605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.066 [2024-11-27 14:09:42.358104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.066 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:13.439 [2024-11-27T14:09:43.953Z] ====================================== 00:10:13.440 [2024-11-27T14:09:43.953Z] busy:2205061134 (cyc) 00:10:13.440 [2024-11-27T14:09:43.953Z] total_run_count: 3635000 00:10:13.440 [2024-11-27T14:09:43.953Z] tsc_hz: 2200000000 (cyc) 00:10:13.440 [2024-11-27T14:09:43.953Z] ====================================== 00:10:13.440 [2024-11-27T14:09:43.953Z] poller_cost: 606 (cyc), 275 (nsec) 00:10:13.440 00:10:13.440 real 0m1.658s 00:10:13.440 user 0m1.436s 00:10:13.440 sys 0m0.110s 00:10:13.440 14:09:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.440 14:09:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.440 ************************************ 00:10:13.440 END TEST thread_poller_perf 00:10:13.440 ************************************ 00:10:13.440 14:09:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:13.440 00:10:13.440 real 0m3.597s 00:10:13.440 user 0m3.051s 00:10:13.440 sys 0m0.327s 00:10:13.440 14:09:43 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.440 14:09:43 thread -- common/autotest_common.sh@10 -- # set +x 00:10:13.440 ************************************ 00:10:13.440 END TEST thread 00:10:13.440 ************************************ 00:10:13.440 14:09:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:13.440 14:09:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:13.440 14:09:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.440 14:09:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.440 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:10:13.440 ************************************ 00:10:13.440 START TEST app_cmdline 00:10:13.440 ************************************ 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:13.440 * Looking for test storage... 00:10:13.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.440 14:09:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.440 --rc genhtml_branch_coverage=1 00:10:13.440 --rc genhtml_function_coverage=1 00:10:13.440 --rc 
genhtml_legend=1 00:10:13.440 --rc geninfo_all_blocks=1 00:10:13.440 --rc geninfo_unexecuted_blocks=1 00:10:13.440 00:10:13.440 ' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.440 --rc genhtml_branch_coverage=1 00:10:13.440 --rc genhtml_function_coverage=1 00:10:13.440 --rc genhtml_legend=1 00:10:13.440 --rc geninfo_all_blocks=1 00:10:13.440 --rc geninfo_unexecuted_blocks=1 00:10:13.440 00:10:13.440 ' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.440 --rc genhtml_branch_coverage=1 00:10:13.440 --rc genhtml_function_coverage=1 00:10:13.440 --rc genhtml_legend=1 00:10:13.440 --rc geninfo_all_blocks=1 00:10:13.440 --rc geninfo_unexecuted_blocks=1 00:10:13.440 00:10:13.440 ' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.440 --rc genhtml_branch_coverage=1 00:10:13.440 --rc genhtml_function_coverage=1 00:10:13.440 --rc genhtml_legend=1 00:10:13.440 --rc geninfo_all_blocks=1 00:10:13.440 --rc geninfo_unexecuted_blocks=1 00:10:13.440 00:10:13.440 ' 00:10:13.440 14:09:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:13.440 14:09:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59905 00:10:13.440 14:09:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:13.440 14:09:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59905 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59905 ']' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:10:13.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.440 14:09:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 [2024-11-27 14:09:43.994719] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:13.698 [2024-11-27 14:09:43.994932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59905 ] 00:10:13.698 [2024-11-27 14:09:44.171644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.955 [2024-11-27 14:09:44.356047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.890 14:09:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.890 14:09:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:14.890 14:09:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:15.149 { 00:10:15.149 "version": "SPDK v25.01-pre git sha1 9094b9600", 00:10:15.149 "fields": { 00:10:15.149 "major": 25, 00:10:15.149 "minor": 1, 00:10:15.149 "patch": 0, 00:10:15.149 "suffix": "-pre", 00:10:15.149 "commit": "9094b9600" 00:10:15.149 } 00:10:15.149 } 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:15.149 14:09:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:15.149 14:09:45 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:15.408 request: 00:10:15.408 { 00:10:15.408 "method": "env_dpdk_get_mem_stats", 00:10:15.408 "req_id": 1 00:10:15.408 } 00:10:15.408 Got JSON-RPC error response 00:10:15.408 response: 00:10:15.408 { 00:10:15.408 "code": -32601, 00:10:15.408 "message": "Method not found" 00:10:15.408 } 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.408 14:09:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59905 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59905 ']' 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59905 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59905 00:10:15.408 killing process with pid 59905 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59905' 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59905 00:10:15.408 14:09:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59905 00:10:17.935 ************************************ 00:10:17.935 END TEST app_cmdline 00:10:17.935 ************************************ 
00:10:17.935 00:10:17.935 real 0m4.417s 00:10:17.935 user 0m4.805s 00:10:17.935 sys 0m0.676s 00:10:17.935 14:09:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.935 14:09:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:17.935 14:09:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:17.935 14:09:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.935 14:09:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.935 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:10:17.935 ************************************ 00:10:17.935 START TEST version 00:10:17.935 ************************************ 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:17.935 * Looking for test storage... 00:10:17.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.935 14:09:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.935 14:09:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.935 14:09:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.935 14:09:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.935 14:09:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.935 14:09:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.935 14:09:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.935 14:09:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.935 14:09:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.935 14:09:48 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:10:17.935 14:09:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.935 14:09:48 version -- scripts/common.sh@344 -- # case "$op" in 00:10:17.935 14:09:48 version -- scripts/common.sh@345 -- # : 1 00:10:17.935 14:09:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.935 14:09:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.935 14:09:48 version -- scripts/common.sh@365 -- # decimal 1 00:10:17.935 14:09:48 version -- scripts/common.sh@353 -- # local d=1 00:10:17.935 14:09:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.935 14:09:48 version -- scripts/common.sh@355 -- # echo 1 00:10:17.935 14:09:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.935 14:09:48 version -- scripts/common.sh@366 -- # decimal 2 00:10:17.935 14:09:48 version -- scripts/common.sh@353 -- # local d=2 00:10:17.935 14:09:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.935 14:09:48 version -- scripts/common.sh@355 -- # echo 2 00:10:17.935 14:09:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.935 14:09:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.935 14:09:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.935 14:09:48 version -- scripts/common.sh@368 -- # return 0 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.935 --rc genhtml_branch_coverage=1 00:10:17.935 --rc genhtml_function_coverage=1 00:10:17.935 --rc genhtml_legend=1 00:10:17.935 --rc geninfo_all_blocks=1 00:10:17.935 --rc geninfo_unexecuted_blocks=1 00:10:17.935 00:10:17.935 ' 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:10:17.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.935 --rc genhtml_branch_coverage=1 00:10:17.935 --rc genhtml_function_coverage=1 00:10:17.935 --rc genhtml_legend=1 00:10:17.935 --rc geninfo_all_blocks=1 00:10:17.935 --rc geninfo_unexecuted_blocks=1 00:10:17.935 00:10:17.935 ' 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.935 --rc genhtml_branch_coverage=1 00:10:17.935 --rc genhtml_function_coverage=1 00:10:17.935 --rc genhtml_legend=1 00:10:17.935 --rc geninfo_all_blocks=1 00:10:17.935 --rc geninfo_unexecuted_blocks=1 00:10:17.935 00:10:17.935 ' 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.935 --rc genhtml_branch_coverage=1 00:10:17.935 --rc genhtml_function_coverage=1 00:10:17.935 --rc genhtml_legend=1 00:10:17.935 --rc geninfo_all_blocks=1 00:10:17.935 --rc geninfo_unexecuted_blocks=1 00:10:17.935 00:10:17.935 ' 00:10:17.935 14:09:48 version -- app/version.sh@17 -- # get_header_version major 00:10:17.935 14:09:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # cut -f2 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # tr -d '"' 00:10:17.935 14:09:48 version -- app/version.sh@17 -- # major=25 00:10:17.935 14:09:48 version -- app/version.sh@18 -- # get_header_version minor 00:10:17.935 14:09:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # cut -f2 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # tr -d '"' 00:10:17.935 14:09:48 version -- app/version.sh@18 -- # minor=1 00:10:17.935 14:09:48 
version -- app/version.sh@19 -- # get_header_version patch 00:10:17.935 14:09:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # tr -d '"' 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # cut -f2 00:10:17.935 14:09:48 version -- app/version.sh@19 -- # patch=0 00:10:17.935 14:09:48 version -- app/version.sh@20 -- # get_header_version suffix 00:10:17.935 14:09:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # tr -d '"' 00:10:17.935 14:09:48 version -- app/version.sh@14 -- # cut -f2 00:10:17.935 14:09:48 version -- app/version.sh@20 -- # suffix=-pre 00:10:17.935 14:09:48 version -- app/version.sh@22 -- # version=25.1 00:10:17.935 14:09:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:17.935 14:09:48 version -- app/version.sh@28 -- # version=25.1rc0 00:10:17.935 14:09:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:17.935 14:09:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:17.935 14:09:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:17.935 14:09:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:17.935 00:10:17.935 real 0m0.261s 00:10:17.935 user 0m0.171s 00:10:17.935 sys 0m0.126s 00:10:17.935 ************************************ 00:10:17.935 END TEST version 00:10:17.935 ************************************ 00:10:17.935 14:09:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.935 14:09:48 version -- common/autotest_common.sh@10 -- # set +x 00:10:18.194 
14:09:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:18.194 14:09:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:10:18.194 14:09:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:18.194 14:09:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.194 14:09:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.194 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.194 ************************************ 00:10:18.194 START TEST bdev_raid 00:10:18.194 ************************************ 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:18.194 * Looking for test storage... 00:10:18.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.194 14:09:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.194 --rc genhtml_branch_coverage=1 00:10:18.194 --rc genhtml_function_coverage=1 00:10:18.194 --rc genhtml_legend=1 00:10:18.194 --rc geninfo_all_blocks=1 00:10:18.194 --rc geninfo_unexecuted_blocks=1 00:10:18.194 00:10:18.194 ' 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.194 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:18.194 --rc genhtml_branch_coverage=1 00:10:18.194 --rc genhtml_function_coverage=1 00:10:18.194 --rc genhtml_legend=1 00:10:18.194 --rc geninfo_all_blocks=1 00:10:18.194 --rc geninfo_unexecuted_blocks=1 00:10:18.194 00:10:18.194 ' 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.194 --rc genhtml_branch_coverage=1 00:10:18.194 --rc genhtml_function_coverage=1 00:10:18.194 --rc genhtml_legend=1 00:10:18.194 --rc geninfo_all_blocks=1 00:10:18.194 --rc geninfo_unexecuted_blocks=1 00:10:18.194 00:10:18.194 ' 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.194 --rc genhtml_branch_coverage=1 00:10:18.194 --rc genhtml_function_coverage=1 00:10:18.194 --rc genhtml_legend=1 00:10:18.194 --rc geninfo_all_blocks=1 00:10:18.194 --rc geninfo_unexecuted_blocks=1 00:10:18.194 00:10:18.194 ' 00:10:18.194 14:09:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:18.194 14:09:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:18.194 14:09:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:10:18.194 14:09:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:10:18.194 14:09:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:10:18.194 14:09:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:10:18.194 14:09:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:10:18.194 14:09:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.195 14:09:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.195 14:09:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.195 ************************************ 
00:10:18.195 START TEST raid1_resize_data_offset_test 00:10:18.195 ************************************ 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60098 00:10:18.195 Process raid pid: 60098 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60098' 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60098 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60098 ']' 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.195 14:09:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.453 [2024-11-27 14:09:48.777268] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:10:18.453 [2024-11-27 14:09:48.777451] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.711 [2024-11-27 14:09:48.968811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.711 [2024-11-27 14:09:49.124400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.968 [2024-11-27 14:09:49.341708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.968 [2024-11-27 14:09:49.341780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.225 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.225 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.225 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:10:19.225 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.225 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 malloc0 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 malloc1 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.484 14:09:49 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 null0 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 [2024-11-27 14:09:49.862524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:10:19.484 [2024-11-27 14:09:49.864991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:19.484 [2024-11-27 14:09:49.865072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:10:19.484 [2024-11-27 14:09:49.865306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:19.484 [2024-11-27 14:09:49.865329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:10:19.484 [2024-11-27 14:09:49.865683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:19.484 [2024-11-27 14:09:49.865928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:19.484 [2024-11-27 14:09:49.865948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:19.484 [2024-11-27 14:09:49.866153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.484 [2024-11-27 14:09:49.946586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.484 14:09:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.050 malloc2 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.050 [2024-11-27 14:09:50.496604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:20.050 [2024-11-27 14:09:50.513852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.050 [2024-11-27 14:09:50.516317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:20.050 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60098 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60098 ']' 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60098 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60098 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60098' 00:10:20.309 killing process with pid 60098 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60098 00:10:20.309 [2024-11-27 14:09:50.591014] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.309 14:09:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60098 00:10:20.310 [2024-11-27 14:09:50.591884] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:10:20.310 [2024-11-27 14:09:50.591960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.310 [2024-11-27 14:09:50.591985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:10:20.310 [2024-11-27 14:09:50.624678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.310 [2024-11-27 14:09:50.625122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.310 [2024-11-27 14:09:50.625148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:22.228 [2024-11-27 14:09:52.271049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.161 14:09:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:10:23.161 00:10:23.161 real 0m4.676s 00:10:23.161 user 0m4.598s 00:10:23.161 sys 0m0.613s 00:10:23.161 14:09:53 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.161 ************************************ 00:10:23.161 END TEST raid1_resize_data_offset_test 00:10:23.161 ************************************ 00:10:23.161 14:09:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.161 14:09:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:10:23.161 14:09:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.161 14:09:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.161 14:09:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.162 ************************************ 00:10:23.162 START TEST raid0_resize_superblock_test 00:10:23.162 ************************************ 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60182 00:10:23.162 Process raid pid: 60182 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60182' 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60182 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60182 ']' 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.162 14:09:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.162 [2024-11-27 14:09:53.490550] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:23.162 [2024-11-27 14:09:53.490713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.162 [2024-11-27 14:09:53.664501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.419 [2024-11-27 14:09:53.798082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.677 [2024-11-27 14:09:54.007237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.677 [2024-11-27 14:09:54.007298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.243 14:09:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.243 14:09:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.243 14:09:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:24.243 14:09:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.243 14:09:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:24.810 malloc0 00:10:24.810 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.810 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:24.810 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.810 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.810 [2024-11-27 14:09:55.181461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:24.810 [2024-11-27 14:09:55.181547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.810 [2024-11-27 14:09:55.181587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:24.810 [2024-11-27 14:09:55.181610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.810 [2024-11-27 14:09:55.184469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.810 [2024-11-27 14:09:55.184519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:24.810 pt0 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.811 5c57b0f9-f897-4540-a7a4-40457a4be57a 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.811 90360e2f-b1f7-413b-8fea-23789beebd69 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.811 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 0fc94520-9de8-4eda-a04c-bed6fc41bf4a 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 [2024-11-27 14:09:55.332568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 90360e2f-b1f7-413b-8fea-23789beebd69 is claimed 00:10:25.069 [2024-11-27 14:09:55.332705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0fc94520-9de8-4eda-a04c-bed6fc41bf4a is claimed 00:10:25.069 [2024-11-27 14:09:55.332928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:25.069 [2024-11-27 14:09:55.332955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:10:25.069 [2024-11-27 14:09:55.333322] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:25.069 [2024-11-27 14:09:55.333591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:25.069 [2024-11-27 14:09:55.333609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:25.069 [2024-11-27 14:09:55.333811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:25.069 14:09:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:10:25.069 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.070 [2024-11-27 14:09:55.452947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.070 [2024-11-27 14:09:55.500946] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:25.070 [2024-11-27 14:09:55.500996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '90360e2f-b1f7-413b-8fea-23789beebd69' was resized: old size 131072, new size 204800 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.070 [2024-11-27 14:09:55.512902] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:25.070 [2024-11-27 14:09:55.512944] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0fc94520-9de8-4eda-a04c-bed6fc41bf4a' was resized: old size 131072, new size 204800 00:10:25.070 [2024-11-27 14:09:55.512991] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:25.070 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.070 14:09:55 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.405 [2024-11-27 14:09:55.628972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.405 [2024-11-27 14:09:55.680725] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:10:25.405 [2024-11-27 14:09:55.680854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:10:25.405 [2024-11-27 14:09:55.680882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.405 [2024-11-27 14:09:55.680905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:25.405 [2024-11-27 14:09:55.681065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.405 [2024-11-27 14:09:55.681118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.405 [2024-11-27 14:09:55.681139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.405 [2024-11-27 14:09:55.688594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:25.405 [2024-11-27 14:09:55.688673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.405 [2024-11-27 14:09:55.688704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:25.405 [2024-11-27 14:09:55.688723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.405 [2024-11-27 14:09:55.691759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.405 [2024-11-27 14:09:55.691849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:10:25.405 pt0 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:25.405 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.405 [2024-11-27 14:09:55.694316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 90360e2f-b1f7-413b-8fea-23789beebd69 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.406 [2024-11-27 14:09:55.694406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 90360e2f-b1f7-413b-8fea-23789beebd69 is claimed 00:10:25.406 [2024-11-27 14:09:55.694547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0fc94520-9de8-4eda-a04c-bed6fc41bf4a 00:10:25.406 [2024-11-27 14:09:55.694581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0fc94520-9de8-4eda-a04c-bed6fc41bf4a is claimed 00:10:25.406 [2024-11-27 14:09:55.694752] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0fc94520-9de8-4eda-a04c-bed6fc41bf4a (2) smaller than existing raid bdev Raid (3) 00:10:25.406 [2024-11-27 14:09:55.694788] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 90360e2f-b1f7-413b-8fea-23789beebd69: File exists 00:10:25.406 [2024-11-27 14:09:55.694868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:25.406 [2024-11-27 14:09:55.694889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:10:25.406 [2024-11-27 14:09:55.695224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:25.406 [2024-11-27 14:09:55.695439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:25.406 [2024-11-27 
14:09:55.695455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:25.406 [2024-11-27 14:09:55.695663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.406 [2024-11-27 14:09:55.709029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60182 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60182 ']' 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60182 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60182 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.406 killing process with pid 60182 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60182' 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60182 00:10:25.406 [2024-11-27 14:09:55.789370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.406 14:09:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60182 00:10:25.406 [2024-11-27 14:09:55.789473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.406 [2024-11-27 14:09:55.789556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.406 [2024-11-27 14:09:55.789572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:26.783 [2024-11-27 14:09:57.131157] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.715 14:09:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:27.715 00:10:27.715 real 0m4.820s 00:10:27.715 user 0m5.262s 00:10:27.715 sys 0m0.647s 00:10:27.715 14:09:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.715 14:09:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.715 
************************************ 00:10:27.715 END TEST raid0_resize_superblock_test 00:10:27.715 ************************************ 00:10:27.974 14:09:58 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:10:27.974 14:09:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.974 14:09:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.974 14:09:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 ************************************ 00:10:27.974 START TEST raid1_resize_superblock_test 00:10:27.974 ************************************ 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60286 00:10:27.974 Process raid pid: 60286 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60286' 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60286 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60286 ']' 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.974 14:09:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 [2024-11-27 14:09:58.383598] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:27.974 [2024-11-27 14:09:58.383865] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.232 [2024-11-27 14:09:58.572612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.232 [2024-11-27 14:09:58.706709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.490 [2024-11-27 14:09:58.919525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.490 [2024-11-27 14:09:58.919583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.056 14:09:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.056 14:09:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.056 14:09:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:29.056 14:09:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.056 14:09:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.622 malloc0 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.622 14:10:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.622 [2024-11-27 14:10:00.004202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:29.622 [2024-11-27 14:10:00.004286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.622 [2024-11-27 14:10:00.004321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:29.622 [2024-11-27 14:10:00.004342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.622 [2024-11-27 14:10:00.007262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.622 [2024-11-27 14:10:00.007318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:29.622 pt0 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.622 1ca52604-64d6-44a3-8cf7-4352b85f6eda 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:10:29.622 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.622 14:10:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 ff5f568a-faa3-4713-8a2e-0822ab4962f4 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 15bc8acd-596f-4b4b-98d5-cb71797e2024 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 [2024-11-27 14:10:00.145201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ff5f568a-faa3-4713-8a2e-0822ab4962f4 is claimed 00:10:29.881 [2024-11-27 14:10:00.145343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 15bc8acd-596f-4b4b-98d5-cb71797e2024 is claimed 00:10:29.881 [2024-11-27 14:10:00.145559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:29.881 [2024-11-27 14:10:00.145585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:10:29.881 [2024-11-27 14:10:00.145965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:29.881 [2024-11-27 14:10:00.146261] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:29.881 [2024-11-27 14:10:00.146285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:29.881 [2024-11-27 14:10:00.146493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:10:29.881 [2024-11-27 14:10:00.281551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 [2024-11-27 14:10:00.333668] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:29.881 [2024-11-27 14:10:00.333718] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ff5f568a-faa3-4713-8a2e-0822ab4962f4' was resized: old size 131072, new size 204800 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:29.881 14:10:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 [2024-11-27 14:10:00.341521] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:29.881 [2024-11-27 14:10:00.341561] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '15bc8acd-596f-4b4b-98d5-cb71797e2024' was resized: old size 131072, new size 204800 00:10:29.881 [2024-11-27 14:10:00.341614] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:10:30.140 [2024-11-27 14:10:00.457589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:10:30.140 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.141 [2024-11-27 14:10:00.509363] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:10:30.141 [2024-11-27 14:10:00.509479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:10:30.141 [2024-11-27 14:10:00.509518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:30.141 [2024-11-27 14:10:00.509731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.141 [2024-11-27 14:10:00.510087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.141 [2024-11-27 14:10:00.510205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.141 [2024-11-27 14:10:00.510229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.141 [2024-11-27 14:10:00.517228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:30.141 [2024-11-27 14:10:00.517303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.141 [2024-11-27 14:10:00.517332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:30.141 [2024-11-27 14:10:00.517353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.141 [2024-11-27 14:10:00.520303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.141 [2024-11-27 14:10:00.520355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:30.141 pt0 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.141 
14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.141 [2024-11-27 14:10:00.522850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ff5f568a-faa3-4713-8a2e-0822ab4962f4 00:10:30.141 [2024-11-27 14:10:00.522947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ff5f568a-faa3-4713-8a2e-0822ab4962f4 is claimed 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.141 [2024-11-27 14:10:00.523090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 15bc8acd-596f-4b4b-98d5-cb71797e2024 00:10:30.141 [2024-11-27 14:10:00.523124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 15bc8acd-596f-4b4b-98d5-cb71797e2024 is claimed 00:10:30.141 [2024-11-27 14:10:00.523286] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 15bc8acd-596f-4b4b-98d5-cb71797e2024 (2) smaller than existing raid bdev Raid (3) 00:10:30.141 [2024-11-27 14:10:00.523319] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ff5f568a-faa3-4713-8a2e-0822ab4962f4: File exists 00:10:30.141 [2024-11-27 14:10:00.523375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:30.141 [2024-11-27 14:10:00.523394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:30.141 [2024-11-27 14:10:00.523713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:30.141 [2024-11-27 14:10:00.523972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:30.141 [2024-11-27 14:10:00.523988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:30.141 
[2024-11-27 14:10:00.524201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:10:30.141 [2024-11-27 14:10:00.537681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60286 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60286 ']' 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60286 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60286 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.141 killing process with pid 60286 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60286' 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60286 00:10:30.141 [2024-11-27 14:10:00.616807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.141 14:10:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60286 00:10:30.141 [2024-11-27 14:10:00.616923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.141 [2024-11-27 14:10:00.617005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.141 [2024-11-27 14:10:00.617021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:31.516 [2024-11-27 14:10:01.987728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.892 14:10:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:32.892 00:10:32.892 real 0m4.853s 00:10:32.892 user 0m5.213s 00:10:32.892 sys 0m0.675s 00:10:32.892 ************************************ 00:10:32.892 END TEST raid1_resize_superblock_test 00:10:32.892 ************************************ 00:10:32.892 14:10:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.892 14:10:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.892 
14:10:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:10:32.892 14:10:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:10:32.892 14:10:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:10:32.892 14:10:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:10:32.892 14:10:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:10:32.892 14:10:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:10:32.892 14:10:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.892 14:10:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.892 14:10:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.893 ************************************ 00:10:32.893 START TEST raid_function_test_raid0 00:10:32.893 ************************************ 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60383 00:10:32.893 Process raid pid: 60383 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60383' 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60383 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60383 ']' 00:10:32.893 14:10:03 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.893 14:10:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:32.893 [2024-11-27 14:10:03.311006] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:32.893 [2024-11-27 14:10:03.311184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.151 [2024-11-27 14:10:03.499393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.410 [2024-11-27 14:10:03.665558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.410 [2024-11-27 14:10:03.904734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.410 [2024-11-27 14:10:03.904799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:33.977 Base_1 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.977 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:33.978 Base_2 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:33.978 [2024-11-27 14:10:04.479172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:33.978 [2024-11-27 14:10:04.481666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:33.978 [2024-11-27 14:10:04.481774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:33.978 [2024-11-27 14:10:04.481795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:33.978 [2024-11-27 14:10:04.482190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:33.978 [2024-11-27 14:10:04.482417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:33.978 [2024-11-27 14:10:04.482434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:10:33.978 [2024-11-27 14:10:04.482642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.978 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:34.236 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:34.236 
14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:34.494 [2024-11-27 14:10:04.847378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:34.494 /dev/nbd0 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.494 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.495 1+0 records in 00:10:34.495 1+0 records out 00:10:34.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503522 s, 8.1 MB/s 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:10:34.495 
14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:34.495 14:10:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:34.781 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:34.782 { 00:10:34.782 "nbd_device": "/dev/nbd0", 00:10:34.782 "bdev_name": "raid" 00:10:34.782 } 00:10:34.782 ]' 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:34.782 { 00:10:34.782 "nbd_device": "/dev/nbd0", 00:10:34.782 "bdev_name": "raid" 00:10:34.782 } 00:10:34.782 ]' 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:34.782 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:34.782 14:10:05 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:35.040 4096+0 records in 00:10:35.040 4096+0 records out 00:10:35.040 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0346182 s, 60.6 MB/s 00:10:35.040 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:35.299 4096+0 records in 00:10:35.299 4096+0 records out 00:10:35.299 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.355375 s, 5.9 MB/s 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:35.299 128+0 records in 00:10:35.299 128+0 records out 00:10:35.299 65536 bytes (66 kB, 64 KiB) copied, 0.00110905 s, 59.1 MB/s 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:35.299 
14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:35.299 2035+0 records in 00:10:35.299 2035+0 records out 00:10:35.299 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0163628 s, 63.7 MB/s 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:35.299 456+0 records in 00:10:35.299 456+0 records out 00:10:35.299 233472 bytes (233 kB, 228 KiB) copied, 0.00299908 s, 77.8 MB/s 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:35.299 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.300 14:10:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:35.867 [2024-11-27 14:10:06.123930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:35.867 14:10:06 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:35.867 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60383 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60383 ']' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60383 
00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60383 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.125 killing process with pid 60383 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60383' 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60383 00:10:36.125 14:10:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60383 00:10:36.125 [2024-11-27 14:10:06.576449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.125 [2024-11-27 14:10:06.576583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.125 [2024-11-27 14:10:06.576660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.125 [2024-11-27 14:10:06.576685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:36.384 [2024-11-27 14:10:06.773355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.790 14:10:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:10:37.790 00:10:37.790 real 0m4.681s 00:10:37.790 user 0m5.837s 00:10:37.790 sys 0m1.094s 00:10:37.790 ************************************ 00:10:37.790 END TEST raid_function_test_raid0 00:10:37.790 ************************************ 00:10:37.790 14:10:07 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.790 14:10:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:37.790 14:10:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:10:37.790 14:10:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.790 14:10:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.790 14:10:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.790 ************************************ 00:10:37.790 START TEST raid_function_test_concat 00:10:37.790 ************************************ 00:10:37.790 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:10:37.790 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:10:37.790 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:37.790 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:37.790 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60523 00:10:37.790 Process raid pid: 60523 00:10:37.790 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60523' 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60523 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60523 ']' 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:10:37.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.791 14:10:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:37.791 [2024-11-27 14:10:08.054236] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:37.791 [2024-11-27 14:10:08.054474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.791 [2024-11-27 14:10:08.250990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.049 [2024-11-27 14:10:08.429302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.307 [2024-11-27 14:10:08.680162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.307 [2024-11-27 14:10:08.680233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:38.873 Base_1 00:10:38.873 14:10:09 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:38.873 Base_2 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:38.873 [2024-11-27 14:10:09.232283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:38.873 [2024-11-27 14:10:09.234788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:38.873 [2024-11-27 14:10:09.234931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:38.873 [2024-11-27 14:10:09.234951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:38.873 [2024-11-27 14:10:09.235319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:38.873 [2024-11-27 14:10:09.235575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:38.873 [2024-11-27 14:10:09.235594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:38.873 [2024-11-27 14:10:09.235800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.873 14:10:09 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:38.873 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:10:39.438 [2024-11-27 14:10:09.664495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.438 /dev/nbd0 00:10:39.438 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:39.438 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:39.438 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.439 1+0 records in 00:10:39.439 1+0 records out 00:10:39.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400738 s, 10.2 MB/s 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:39.439 14:10:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:39.697 { 00:10:39.697 "nbd_device": "/dev/nbd0", 00:10:39.697 "bdev_name": "raid" 00:10:39.697 } 00:10:39.697 ]' 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:39.697 { 00:10:39.697 "nbd_device": "/dev/nbd0", 00:10:39.697 "bdev_name": "raid" 00:10:39.697 } 00:10:39.697 ]' 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:39.697 14:10:10 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:39.697 4096+0 records in 00:10:39.697 4096+0 records out 00:10:39.697 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0327458 s, 64.0 MB/s 00:10:39.697 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:40.263 4096+0 records in 00:10:40.263 4096+0 records out 00:10:40.263 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.351804 s, 6.0 MB/s 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:40.263 128+0 records in 00:10:40.263 128+0 records out 00:10:40.263 65536 bytes (66 kB, 64 KiB) copied, 0.000580176 s, 113 MB/s 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:40.263 2035+0 records in 00:10:40.263 2035+0 records out 00:10:40.263 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00877498 s, 119 MB/s 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:40.263 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:40.264 456+0 records in 00:10:40.264 456+0 records out 00:10:40.264 233472 bytes (233 kB, 228 KiB) copied, 0.0019605 s, 119 MB/s 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.264 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.527 [2024-11-27 14:10:10.983724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@41 -- # break 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:40.527 14:10:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:40.794 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:40.794 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:40.794 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60523 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60523 ']' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60523 00:10:41.053 
14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60523 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.053 killing process with pid 60523 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60523' 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60523 00:10:41.053 [2024-11-27 14:10:11.359868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.053 14:10:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60523 00:10:41.053 [2024-11-27 14:10:11.360018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.053 [2024-11-27 14:10:11.360089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.053 [2024-11-27 14:10:11.360110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:41.053 [2024-11-27 14:10:11.552263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.427 14:10:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:10:42.427 ************************************ 00:10:42.427 END TEST raid_function_test_concat 00:10:42.427 ************************************ 00:10:42.427 00:10:42.427 real 0m4.864s 00:10:42.427 user 0m6.011s 00:10:42.427 sys 0m1.146s 00:10:42.427 14:10:12 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.427 14:10:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:42.427 14:10:12 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:10:42.427 14:10:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.427 14:10:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.427 14:10:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.427 ************************************ 00:10:42.427 START TEST raid0_resize_test 00:10:42.427 ************************************ 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60658 00:10:42.427 Process raid pid: 60658 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60658' 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60658 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60658 ']' 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.427 14:10:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.685 [2024-11-27 14:10:12.962647] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:10:42.685 [2024-11-27 14:10:12.962910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.685 [2024-11-27 14:10:13.155865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.944 [2024-11-27 14:10:13.316789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.202 [2024-11-27 14:10:13.550606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.202 [2024-11-27 14:10:13.550683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.460 Base_1 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.460 Base_2 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.460 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.460 [2024-11-27 14:10:13.965582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:43.460 [2024-11-27 14:10:13.968094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:43.460 [2024-11-27 14:10:13.968176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:43.460 [2024-11-27 14:10:13.968197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:43.461 [2024-11-27 14:10:13.968568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:43.461 [2024-11-27 14:10:13.968840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:43.461 [2024-11-27 14:10:13.968867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:43.461 [2024-11-27 14:10:13.969067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.461 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.461 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:43.461 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.461 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.719 [2024-11-27 14:10:13.973590] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:43.719 [2024-11-27 14:10:13.973631] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:43.719 true 
00:10:43.719 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.719 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:43.719 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.719 14:10:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.719 14:10:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:43.719 [2024-11-27 14:10:13.985890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.719 [2024-11-27 14:10:14.029653] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:43.719 [2024-11-27 14:10:14.029694] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:43.719 [2024-11-27 14:10:14.029738] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:43.719 true 
00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:43.719 [2024-11-27 14:10:14.041824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60658 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60658 ']' 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60658 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60658 00:10:43.719 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.719 14:10:14 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.720 killing process with pid 60658 00:10:43.720 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60658' 00:10:43.720 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60658 00:10:43.720 [2024-11-27 14:10:14.112980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.720 14:10:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60658 00:10:43.720 [2024-11-27 14:10:14.113096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.720 [2024-11-27 14:10:14.113167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.720 [2024-11-27 14:10:14.113183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:43.720 [2024-11-27 14:10:14.128863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.094 14:10:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:45.094 00:10:45.094 real 0m2.369s 00:10:45.094 user 0m2.537s 00:10:45.094 sys 0m0.455s 00:10:45.094 14:10:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.094 14:10:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 ************************************ 00:10:45.094 END TEST raid0_resize_test 00:10:45.094 ************************************ 00:10:45.094 14:10:15 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:45.094 14:10:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.094 14:10:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.094 14:10:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 
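The raid0 test above and the raid1 test below check the same invariant from two angles: after `bdev_null_resize` grows each base bdev from 32 MiB to 64 MiB, a raid0 array's block count should be the sum of its base bdevs' counts, while a raid1 mirror's should be the minimum. As a rough illustration only (plain Python, not SPDK code; `expected_raid_blkcnt` is a hypothetical helper, not part of the test scripts), the block counts reported in these logs can be reproduced from the 512-byte block size and base bdev sizes:

```python
def expected_raid_blkcnt(level, base_sizes_mb, blksize=512):
    """Hypothetical helper mirroring the size math the resize tests verify.

    raid0 stripes data across members, so capacity is the sum of the
    base bdevs; raid1 mirrors, so capacity is the smallest base bdev.
    """
    blocks = [mb * 1024 * 1024 // blksize for mb in base_sizes_mb]
    return sum(blocks) if level == 0 else min(blocks)

# Before resize: two 32 MiB null bdevs (Base_1, Base_2).
assert expected_raid_blkcnt(0, [32, 32]) == 131072  # raid0 log: blkcnt=131072
assert expected_raid_blkcnt(1, [32, 32]) == 65536   # raid1 log: blkcnt=65536

# After bdev_null_resize grows both bases to 64 MiB.
assert expected_raid_blkcnt(0, [64, 64]) == 262144  # "changed from 131072 to 262144"
assert expected_raid_blkcnt(1, [64, 64]) == 131072  # "changed from 65536 to 131072"
```

These numbers match the `raid_bdev_resize_base_bdev` notices in the traces: the raid0 run reports the block count changing from 131072 to 262144, and the raid1 run from 65536 to 131072.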
************************************ 00:10:45.094 START TEST raid1_resize_test 00:10:45.094 ************************************ 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:45.094 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60719 00:10:45.095 Process raid pid: 60719 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60719' 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60719 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60719 ']' 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.095 14:10:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.095 [2024-11-27 14:10:15.366106] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:45.095 [2024-11-27 14:10:15.366268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.095 [2024-11-27 14:10:15.545335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.353 [2024-11-27 14:10:15.685979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.612 [2024-11-27 14:10:15.903928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.612 [2024-11-27 14:10:15.903987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 Base_1 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:46.178 
14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 Base_2 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-11-27 14:10:16.425767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:46.178 [2024-11-27 14:10:16.428355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:46.178 [2024-11-27 14:10:16.428443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:46.178 [2024-11-27 14:10:16.428464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:46.178 [2024-11-27 14:10:16.428792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:46.178 [2024-11-27 14:10:16.429014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:46.178 [2024-11-27 14:10:16.429038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:46.178 [2024-11-27 14:10:16.429215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:46.178 14:10:16 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-11-27 14:10:16.433757] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:46.178 [2024-11-27 14:10:16.433809] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:46.178 true 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-11-27 14:10:16.446021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:46.178 [2024-11-27 14:10:16.493756] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:46.178 [2024-11-27 14:10:16.493787] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:46.178 [2024-11-27 14:10:16.493849] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:46.178 true 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-11-27 14:10:16.509999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60719 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60719 ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60719 00:10:46.178 
14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60719 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60719' 00:10:46.178 killing process with pid 60719 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60719 00:10:46.178 [2024-11-27 14:10:16.578152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.178 14:10:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60719 00:10:46.178 [2024-11-27 14:10:16.578260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.178 [2024-11-27 14:10:16.578912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.178 [2024-11-27 14:10:16.578944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:46.178 [2024-11-27 14:10:16.594527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.564 14:10:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:47.564 00:10:47.564 real 0m2.443s 00:10:47.564 user 0m2.709s 00:10:47.564 sys 0m0.404s 00:10:47.564 14:10:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.564 ************************************ 00:10:47.564 END TEST raid1_resize_test 00:10:47.564 ************************************ 00:10:47.564 14:10:17 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.564 14:10:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:47.564 14:10:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:47.564 14:10:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:47.564 14:10:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.564 14:10:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.564 14:10:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.564 ************************************ 00:10:47.564 START TEST raid_state_function_test 00:10:47.564 ************************************ 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.564 14:10:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:47.564 Process raid pid: 60782 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60782 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60782' 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60782 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.564 14:10:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60782 ']' 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.564 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.564 [2024-11-27 14:10:17.890004] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:47.564 [2024-11-27 14:10:17.890182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.564 [2024-11-27 14:10:18.071971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.824 [2024-11-27 14:10:18.227095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.083 [2024-11-27 14:10:18.486947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.083 [2024-11-27 14:10:18.487360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 [2024-11-27 14:10:18.939531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.655 [2024-11-27 14:10:18.939610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.655 [2024-11-27 14:10:18.939627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.655 [2024-11-27 14:10:18.939643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.655 
14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.655 "name": "Existed_Raid", 00:10:48.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.655 "strip_size_kb": 64, 00:10:48.655 "state": "configuring", 00:10:48.655 "raid_level": "raid0", 00:10:48.655 "superblock": false, 00:10:48.655 "num_base_bdevs": 2, 00:10:48.655 "num_base_bdevs_discovered": 0, 00:10:48.655 "num_base_bdevs_operational": 2, 00:10:48.655 "base_bdevs_list": [ 00:10:48.655 { 00:10:48.655 "name": "BaseBdev1", 00:10:48.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.655 "is_configured": false, 00:10:48.655 "data_offset": 0, 00:10:48.655 "data_size": 0 00:10:48.655 }, 00:10:48.655 { 00:10:48.655 "name": "BaseBdev2", 00:10:48.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.655 "is_configured": false, 00:10:48.655 "data_offset": 0, 00:10:48.655 "data_size": 0 00:10:48.655 } 00:10:48.655 ] 00:10:48.655 }' 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.655 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.238 14:10:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.238 [2024-11-27 14:10:19.459672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.238 [2024-11-27 14:10:19.459717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.238 [2024-11-27 14:10:19.471657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.238 [2024-11-27 14:10:19.471850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.238 [2024-11-27 14:10:19.471976] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.238 [2024-11-27 14:10:19.472026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.238 [2024-11-27 14:10:19.523640] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.238 BaseBdev1 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:49.238 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.239 [ 00:10:49.239 { 00:10:49.239 "name": "BaseBdev1", 00:10:49.239 "aliases": [ 00:10:49.239 "2375f01b-582c-4fba-890b-9acb179f3015" 00:10:49.239 ], 00:10:49.239 "product_name": "Malloc disk", 00:10:49.239 "block_size": 512, 00:10:49.239 "num_blocks": 65536, 00:10:49.239 "uuid": 
"2375f01b-582c-4fba-890b-9acb179f3015", 00:10:49.239 "assigned_rate_limits": { 00:10:49.239 "rw_ios_per_sec": 0, 00:10:49.239 "rw_mbytes_per_sec": 0, 00:10:49.239 "r_mbytes_per_sec": 0, 00:10:49.239 "w_mbytes_per_sec": 0 00:10:49.239 }, 00:10:49.239 "claimed": true, 00:10:49.239 "claim_type": "exclusive_write", 00:10:49.239 "zoned": false, 00:10:49.239 "supported_io_types": { 00:10:49.239 "read": true, 00:10:49.239 "write": true, 00:10:49.239 "unmap": true, 00:10:49.239 "flush": true, 00:10:49.239 "reset": true, 00:10:49.239 "nvme_admin": false, 00:10:49.239 "nvme_io": false, 00:10:49.239 "nvme_io_md": false, 00:10:49.239 "write_zeroes": true, 00:10:49.239 "zcopy": true, 00:10:49.239 "get_zone_info": false, 00:10:49.239 "zone_management": false, 00:10:49.239 "zone_append": false, 00:10:49.239 "compare": false, 00:10:49.239 "compare_and_write": false, 00:10:49.239 "abort": true, 00:10:49.239 "seek_hole": false, 00:10:49.239 "seek_data": false, 00:10:49.239 "copy": true, 00:10:49.239 "nvme_iov_md": false 00:10:49.239 }, 00:10:49.239 "memory_domains": [ 00:10:49.239 { 00:10:49.239 "dma_device_id": "system", 00:10:49.239 "dma_device_type": 1 00:10:49.239 }, 00:10:49.239 { 00:10:49.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.239 "dma_device_type": 2 00:10:49.239 } 00:10:49.239 ], 00:10:49.239 "driver_specific": {} 00:10:49.239 } 00:10:49.239 ] 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.239 14:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.239 "name": "Existed_Raid", 00:10:49.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.239 "strip_size_kb": 64, 00:10:49.239 "state": "configuring", 00:10:49.239 "raid_level": "raid0", 00:10:49.239 "superblock": false, 00:10:49.239 "num_base_bdevs": 2, 00:10:49.239 "num_base_bdevs_discovered": 1, 00:10:49.239 "num_base_bdevs_operational": 2, 00:10:49.239 "base_bdevs_list": [ 00:10:49.239 { 00:10:49.239 "name": "BaseBdev1", 00:10:49.239 "uuid": "2375f01b-582c-4fba-890b-9acb179f3015", 00:10:49.239 "is_configured": true, 00:10:49.239 "data_offset": 0, 
00:10:49.239 "data_size": 65536 00:10:49.239 }, 00:10:49.239 { 00:10:49.239 "name": "BaseBdev2", 00:10:49.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.239 "is_configured": false, 00:10:49.239 "data_offset": 0, 00:10:49.239 "data_size": 0 00:10:49.239 } 00:10:49.239 ] 00:10:49.239 }' 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.239 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.805 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.805 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.805 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.806 [2024-11-27 14:10:20.043842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.806 [2024-11-27 14:10:20.043919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.806 [2024-11-27 14:10:20.051902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.806 [2024-11-27 14:10:20.054466] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.806 [2024-11-27 14:10:20.055567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.806 "name": "Existed_Raid", 00:10:49.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.806 "strip_size_kb": 64, 00:10:49.806 "state": "configuring", 00:10:49.806 "raid_level": "raid0", 00:10:49.806 "superblock": false, 00:10:49.806 "num_base_bdevs": 2, 00:10:49.806 "num_base_bdevs_discovered": 1, 00:10:49.806 "num_base_bdevs_operational": 2, 00:10:49.806 "base_bdevs_list": [ 00:10:49.806 { 00:10:49.806 "name": "BaseBdev1", 00:10:49.806 "uuid": "2375f01b-582c-4fba-890b-9acb179f3015", 00:10:49.806 "is_configured": true, 00:10:49.806 "data_offset": 0, 00:10:49.806 "data_size": 65536 00:10:49.806 }, 00:10:49.806 { 00:10:49.806 "name": "BaseBdev2", 00:10:49.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.806 "is_configured": false, 00:10:49.806 "data_offset": 0, 00:10:49.806 "data_size": 0 00:10:49.806 } 00:10:49.806 ] 00:10:49.806 }' 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.806 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.064 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.064 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.064 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.322 [2024-11-27 14:10:20.599221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.322 [2024-11-27 14:10:20.599501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.322 [2024-11-27 14:10:20.599528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:50.322 [2024-11-27 14:10:20.599897] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:50.322 [2024-11-27 14:10:20.600148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.322 [2024-11-27 14:10:20.600170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:50.322 [2024-11-27 14:10:20.600501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.322 BaseBdev2 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.322 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.322 14:10:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.322 [ 00:10:50.322 { 00:10:50.322 "name": "BaseBdev2", 00:10:50.322 "aliases": [ 00:10:50.322 "4d471f76-1949-4d35-9e7a-c713bc74289b" 00:10:50.322 ], 00:10:50.322 "product_name": "Malloc disk", 00:10:50.322 "block_size": 512, 00:10:50.323 "num_blocks": 65536, 00:10:50.323 "uuid": "4d471f76-1949-4d35-9e7a-c713bc74289b", 00:10:50.323 "assigned_rate_limits": { 00:10:50.323 "rw_ios_per_sec": 0, 00:10:50.323 "rw_mbytes_per_sec": 0, 00:10:50.323 "r_mbytes_per_sec": 0, 00:10:50.323 "w_mbytes_per_sec": 0 00:10:50.323 }, 00:10:50.323 "claimed": true, 00:10:50.323 "claim_type": "exclusive_write", 00:10:50.323 "zoned": false, 00:10:50.323 "supported_io_types": { 00:10:50.323 "read": true, 00:10:50.323 "write": true, 00:10:50.323 "unmap": true, 00:10:50.323 "flush": true, 00:10:50.323 "reset": true, 00:10:50.323 "nvme_admin": false, 00:10:50.323 "nvme_io": false, 00:10:50.323 "nvme_io_md": false, 00:10:50.323 "write_zeroes": true, 00:10:50.323 "zcopy": true, 00:10:50.323 "get_zone_info": false, 00:10:50.323 "zone_management": false, 00:10:50.323 "zone_append": false, 00:10:50.323 "compare": false, 00:10:50.323 "compare_and_write": false, 00:10:50.323 "abort": true, 00:10:50.323 "seek_hole": false, 00:10:50.323 "seek_data": false, 00:10:50.323 "copy": true, 00:10:50.323 "nvme_iov_md": false 00:10:50.323 }, 00:10:50.323 "memory_domains": [ 00:10:50.323 { 00:10:50.323 "dma_device_id": "system", 00:10:50.323 "dma_device_type": 1 00:10:50.323 }, 00:10:50.323 { 00:10:50.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.323 "dma_device_type": 2 00:10:50.323 } 00:10:50.323 ], 00:10:50.323 "driver_specific": {} 00:10:50.323 } 00:10:50.323 ] 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.323 14:10:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:50.323 "name": "Existed_Raid", 00:10:50.323 "uuid": "bf6b49cb-3a65-4784-b73f-13bc9fe3159f", 00:10:50.323 "strip_size_kb": 64, 00:10:50.323 "state": "online", 00:10:50.323 "raid_level": "raid0", 00:10:50.323 "superblock": false, 00:10:50.323 "num_base_bdevs": 2, 00:10:50.323 "num_base_bdevs_discovered": 2, 00:10:50.323 "num_base_bdevs_operational": 2, 00:10:50.323 "base_bdevs_list": [ 00:10:50.323 { 00:10:50.323 "name": "BaseBdev1", 00:10:50.323 "uuid": "2375f01b-582c-4fba-890b-9acb179f3015", 00:10:50.323 "is_configured": true, 00:10:50.323 "data_offset": 0, 00:10:50.323 "data_size": 65536 00:10:50.323 }, 00:10:50.323 { 00:10:50.323 "name": "BaseBdev2", 00:10:50.323 "uuid": "4d471f76-1949-4d35-9e7a-c713bc74289b", 00:10:50.323 "is_configured": true, 00:10:50.323 "data_offset": 0, 00:10:50.323 "data_size": 65536 00:10:50.323 } 00:10:50.323 ] 00:10:50.323 }' 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.323 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.888 [2024-11-27 14:10:21.131890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.888 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.888 "name": "Existed_Raid", 00:10:50.888 "aliases": [ 00:10:50.888 "bf6b49cb-3a65-4784-b73f-13bc9fe3159f" 00:10:50.888 ], 00:10:50.888 "product_name": "Raid Volume", 00:10:50.888 "block_size": 512, 00:10:50.888 "num_blocks": 131072, 00:10:50.888 "uuid": "bf6b49cb-3a65-4784-b73f-13bc9fe3159f", 00:10:50.888 "assigned_rate_limits": { 00:10:50.888 "rw_ios_per_sec": 0, 00:10:50.888 "rw_mbytes_per_sec": 0, 00:10:50.888 "r_mbytes_per_sec": 0, 00:10:50.888 "w_mbytes_per_sec": 0 00:10:50.888 }, 00:10:50.888 "claimed": false, 00:10:50.888 "zoned": false, 00:10:50.888 "supported_io_types": { 00:10:50.888 "read": true, 00:10:50.888 "write": true, 00:10:50.888 "unmap": true, 00:10:50.888 "flush": true, 00:10:50.888 "reset": true, 00:10:50.888 "nvme_admin": false, 00:10:50.888 "nvme_io": false, 00:10:50.888 "nvme_io_md": false, 00:10:50.888 "write_zeroes": true, 00:10:50.888 "zcopy": false, 00:10:50.888 "get_zone_info": false, 00:10:50.888 "zone_management": false, 00:10:50.888 "zone_append": false, 00:10:50.888 "compare": false, 00:10:50.888 "compare_and_write": false, 00:10:50.888 "abort": false, 00:10:50.888 "seek_hole": false, 00:10:50.888 "seek_data": false, 00:10:50.888 "copy": false, 00:10:50.888 "nvme_iov_md": false 00:10:50.888 }, 00:10:50.888 "memory_domains": [ 00:10:50.888 { 00:10:50.888 "dma_device_id": "system", 00:10:50.888 "dma_device_type": 1 00:10:50.888 }, 00:10:50.888 { 00:10:50.888 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:50.888 "dma_device_type": 2 00:10:50.888 }, 00:10:50.888 { 00:10:50.888 "dma_device_id": "system", 00:10:50.888 "dma_device_type": 1 00:10:50.888 }, 00:10:50.888 { 00:10:50.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.888 "dma_device_type": 2 00:10:50.888 } 00:10:50.888 ], 00:10:50.888 "driver_specific": { 00:10:50.888 "raid": { 00:10:50.888 "uuid": "bf6b49cb-3a65-4784-b73f-13bc9fe3159f", 00:10:50.888 "strip_size_kb": 64, 00:10:50.888 "state": "online", 00:10:50.888 "raid_level": "raid0", 00:10:50.889 "superblock": false, 00:10:50.889 "num_base_bdevs": 2, 00:10:50.889 "num_base_bdevs_discovered": 2, 00:10:50.889 "num_base_bdevs_operational": 2, 00:10:50.889 "base_bdevs_list": [ 00:10:50.889 { 00:10:50.889 "name": "BaseBdev1", 00:10:50.889 "uuid": "2375f01b-582c-4fba-890b-9acb179f3015", 00:10:50.889 "is_configured": true, 00:10:50.889 "data_offset": 0, 00:10:50.889 "data_size": 65536 00:10:50.889 }, 00:10:50.889 { 00:10:50.889 "name": "BaseBdev2", 00:10:50.889 "uuid": "4d471f76-1949-4d35-9e7a-c713bc74289b", 00:10:50.889 "is_configured": true, 00:10:50.889 "data_offset": 0, 00:10:50.889 "data_size": 65536 00:10:50.889 } 00:10:50.889 ] 00:10:50.889 } 00:10:50.889 } 00:10:50.889 }' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.889 BaseBdev2' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.889 14:10:21 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:50.889 [2024-11-27 14:10:21.391573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.889 [2024-11-27 14:10:21.391629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.889 [2024-11-27 14:10:21.391712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.146 14:10:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.146 "name": "Existed_Raid", 00:10:51.146 "uuid": "bf6b49cb-3a65-4784-b73f-13bc9fe3159f", 00:10:51.146 "strip_size_kb": 64, 00:10:51.146 "state": "offline", 00:10:51.146 "raid_level": "raid0", 00:10:51.146 "superblock": false, 00:10:51.146 "num_base_bdevs": 2, 00:10:51.146 "num_base_bdevs_discovered": 1, 00:10:51.146 "num_base_bdevs_operational": 1, 00:10:51.146 "base_bdevs_list": [ 00:10:51.146 { 00:10:51.146 "name": null, 00:10:51.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.146 "is_configured": false, 00:10:51.146 "data_offset": 0, 00:10:51.146 "data_size": 65536 00:10:51.146 }, 00:10:51.146 { 00:10:51.146 "name": "BaseBdev2", 00:10:51.146 "uuid": "4d471f76-1949-4d35-9e7a-c713bc74289b", 00:10:51.146 "is_configured": true, 00:10:51.146 "data_offset": 0, 00:10:51.146 "data_size": 65536 00:10:51.146 } 00:10:51.146 ] 00:10:51.146 }' 00:10:51.146 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.147 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.712 [2024-11-27 14:10:22.080045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.712 [2024-11-27 14:10:22.080129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.712 14:10:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.712 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60782 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60782 ']' 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60782 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60782 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.970 killing process with pid 60782 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60782' 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60782 00:10:51.970 [2024-11-27 14:10:22.264637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:51.970 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60782 00:10:51.970 [2024-11-27 14:10:22.280917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.906 14:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.906 00:10:52.906 real 0m5.586s 00:10:52.906 user 0m8.387s 00:10:52.906 sys 0m0.795s 00:10:52.906 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.906 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.906 ************************************ 00:10:52.906 END TEST raid_state_function_test 00:10:52.906 ************************************ 00:10:52.906 14:10:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:52.906 14:10:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.906 14:10:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.906 14:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.163 ************************************ 00:10:53.163 START TEST raid_state_function_test_sb 00:10:53.163 ************************************ 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.163 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.164 Process raid pid: 61035 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:53.164 14:10:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61035 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61035' 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61035 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61035 ']' 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.164 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.164 [2024-11-27 14:10:23.542941] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:10:53.164 [2024-11-27 14:10:23.543297] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.422 [2024-11-27 14:10:23.732166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.422 [2024-11-27 14:10:23.872520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.679 [2024-11-27 14:10:24.093311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.680 [2024-11-27 14:10:24.093584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.245 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.245 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:54.245 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:54.245 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.245 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.245 [2024-11-27 14:10:24.533037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.246 [2024-11-27 14:10:24.533115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.246 [2024-11-27 14:10:24.533134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.246 [2024-11-27 14:10:24.533151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.246 
14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.246 "name": "Existed_Raid", 00:10:54.246 "uuid": "93223e7a-561a-4531-b43f-d8380a6645c4", 00:10:54.246 "strip_size_kb": 
64, 00:10:54.246 "state": "configuring", 00:10:54.246 "raid_level": "raid0", 00:10:54.246 "superblock": true, 00:10:54.246 "num_base_bdevs": 2, 00:10:54.246 "num_base_bdevs_discovered": 0, 00:10:54.246 "num_base_bdevs_operational": 2, 00:10:54.246 "base_bdevs_list": [ 00:10:54.246 { 00:10:54.246 "name": "BaseBdev1", 00:10:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.246 "is_configured": false, 00:10:54.246 "data_offset": 0, 00:10:54.246 "data_size": 0 00:10:54.246 }, 00:10:54.246 { 00:10:54.246 "name": "BaseBdev2", 00:10:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.246 "is_configured": false, 00:10:54.246 "data_offset": 0, 00:10:54.246 "data_size": 0 00:10:54.246 } 00:10:54.246 ] 00:10:54.246 }' 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.246 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.809 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.809 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.809 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.809 [2024-11-27 14:10:25.029332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.809 [2024-11-27 14:10:25.029400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:54.809 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.809 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:54.809 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.810 14:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.810 [2024-11-27 14:10:25.037313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.810 [2024-11-27 14:10:25.037529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.810 [2024-11-27 14:10:25.037685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.810 [2024-11-27 14:10:25.037856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.810 [2024-11-27 14:10:25.091239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.810 BaseBdev1 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.810 [ 00:10:54.810 { 00:10:54.810 "name": "BaseBdev1", 00:10:54.810 "aliases": [ 00:10:54.810 "a96c1e51-9c31-4521-807c-85f2ffa54116" 00:10:54.810 ], 00:10:54.810 "product_name": "Malloc disk", 00:10:54.810 "block_size": 512, 00:10:54.810 "num_blocks": 65536, 00:10:54.810 "uuid": "a96c1e51-9c31-4521-807c-85f2ffa54116", 00:10:54.810 "assigned_rate_limits": { 00:10:54.810 "rw_ios_per_sec": 0, 00:10:54.810 "rw_mbytes_per_sec": 0, 00:10:54.810 "r_mbytes_per_sec": 0, 00:10:54.810 "w_mbytes_per_sec": 0 00:10:54.810 }, 00:10:54.810 "claimed": true, 00:10:54.810 "claim_type": "exclusive_write", 00:10:54.810 "zoned": false, 00:10:54.810 "supported_io_types": { 00:10:54.810 "read": true, 00:10:54.810 "write": true, 00:10:54.810 "unmap": true, 00:10:54.810 "flush": true, 00:10:54.810 "reset": true, 00:10:54.810 "nvme_admin": false, 00:10:54.810 "nvme_io": false, 00:10:54.810 "nvme_io_md": false, 00:10:54.810 "write_zeroes": true, 00:10:54.810 "zcopy": true, 00:10:54.810 "get_zone_info": false, 00:10:54.810 "zone_management": false, 00:10:54.810 "zone_append": false, 00:10:54.810 "compare": false, 00:10:54.810 "compare_and_write": false, 00:10:54.810 
"abort": true, 00:10:54.810 "seek_hole": false, 00:10:54.810 "seek_data": false, 00:10:54.810 "copy": true, 00:10:54.810 "nvme_iov_md": false 00:10:54.810 }, 00:10:54.810 "memory_domains": [ 00:10:54.810 { 00:10:54.810 "dma_device_id": "system", 00:10:54.810 "dma_device_type": 1 00:10:54.810 }, 00:10:54.810 { 00:10:54.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.810 "dma_device_type": 2 00:10:54.810 } 00:10:54.810 ], 00:10:54.810 "driver_specific": {} 00:10:54.810 } 00:10:54.810 ] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.810 "name": "Existed_Raid", 00:10:54.810 "uuid": "aa53b066-322c-4ac7-b98d-e0665a47a82c", 00:10:54.810 "strip_size_kb": 64, 00:10:54.810 "state": "configuring", 00:10:54.810 "raid_level": "raid0", 00:10:54.810 "superblock": true, 00:10:54.810 "num_base_bdevs": 2, 00:10:54.810 "num_base_bdevs_discovered": 1, 00:10:54.810 "num_base_bdevs_operational": 2, 00:10:54.810 "base_bdevs_list": [ 00:10:54.810 { 00:10:54.810 "name": "BaseBdev1", 00:10:54.810 "uuid": "a96c1e51-9c31-4521-807c-85f2ffa54116", 00:10:54.810 "is_configured": true, 00:10:54.810 "data_offset": 2048, 00:10:54.810 "data_size": 63488 00:10:54.810 }, 00:10:54.810 { 00:10:54.810 "name": "BaseBdev2", 00:10:54.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.810 "is_configured": false, 00:10:54.810 "data_offset": 0, 00:10:54.810 "data_size": 0 00:10:54.810 } 00:10:54.810 ] 00:10:54.810 }' 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.810 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.376 [2024-11-27 14:10:25.699502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.376 [2024-11-27 14:10:25.699600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.376 [2024-11-27 14:10:25.707516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.376 [2024-11-27 14:10:25.710152] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.376 [2024-11-27 14:10:25.710417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.376 "name": "Existed_Raid", 00:10:55.376 "uuid": "e5f23690-af05-4914-9d66-c3b4f299000a", 00:10:55.376 "strip_size_kb": 64, 00:10:55.376 "state": "configuring", 00:10:55.376 "raid_level": "raid0", 00:10:55.376 "superblock": true, 00:10:55.376 "num_base_bdevs": 2, 00:10:55.376 "num_base_bdevs_discovered": 1, 00:10:55.376 "num_base_bdevs_operational": 2, 00:10:55.376 "base_bdevs_list": [ 00:10:55.376 { 00:10:55.376 "name": "BaseBdev1", 00:10:55.376 "uuid": "a96c1e51-9c31-4521-807c-85f2ffa54116", 00:10:55.376 "is_configured": true, 00:10:55.376 "data_offset": 2048, 
00:10:55.376 "data_size": 63488 00:10:55.376 }, 00:10:55.376 { 00:10:55.376 "name": "BaseBdev2", 00:10:55.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.376 "is_configured": false, 00:10:55.376 "data_offset": 0, 00:10:55.376 "data_size": 0 00:10:55.376 } 00:10:55.376 ] 00:10:55.376 }' 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.376 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.940 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.941 [2024-11-27 14:10:26.269936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.941 [2024-11-27 14:10:26.270680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.941 [2024-11-27 14:10:26.270714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:55.941 BaseBdev2 00:10:55.941 [2024-11-27 14:10:26.271255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:55.941 [2024-11-27 14:10:26.271572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.941 [2024-11-27 14:10:26.271608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:55.941 [2024-11-27 14:10:26.271932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.941 [ 00:10:55.941 { 00:10:55.941 "name": "BaseBdev2", 00:10:55.941 "aliases": [ 00:10:55.941 "d8fa0410-977a-426e-b098-e450ed2af29e" 00:10:55.941 ], 00:10:55.941 "product_name": "Malloc disk", 00:10:55.941 "block_size": 512, 00:10:55.941 "num_blocks": 65536, 00:10:55.941 "uuid": "d8fa0410-977a-426e-b098-e450ed2af29e", 00:10:55.941 "assigned_rate_limits": { 00:10:55.941 "rw_ios_per_sec": 0, 00:10:55.941 "rw_mbytes_per_sec": 0, 00:10:55.941 "r_mbytes_per_sec": 0, 00:10:55.941 "w_mbytes_per_sec": 0 00:10:55.941 }, 00:10:55.941 "claimed": true, 00:10:55.941 "claim_type": 
"exclusive_write", 00:10:55.941 "zoned": false, 00:10:55.941 "supported_io_types": { 00:10:55.941 "read": true, 00:10:55.941 "write": true, 00:10:55.941 "unmap": true, 00:10:55.941 "flush": true, 00:10:55.941 "reset": true, 00:10:55.941 "nvme_admin": false, 00:10:55.941 "nvme_io": false, 00:10:55.941 "nvme_io_md": false, 00:10:55.941 "write_zeroes": true, 00:10:55.941 "zcopy": true, 00:10:55.941 "get_zone_info": false, 00:10:55.941 "zone_management": false, 00:10:55.941 "zone_append": false, 00:10:55.941 "compare": false, 00:10:55.941 "compare_and_write": false, 00:10:55.941 "abort": true, 00:10:55.941 "seek_hole": false, 00:10:55.941 "seek_data": false, 00:10:55.941 "copy": true, 00:10:55.941 "nvme_iov_md": false 00:10:55.941 }, 00:10:55.941 "memory_domains": [ 00:10:55.941 { 00:10:55.941 "dma_device_id": "system", 00:10:55.941 "dma_device_type": 1 00:10:55.941 }, 00:10:55.941 { 00:10:55.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.941 "dma_device_type": 2 00:10:55.941 } 00:10:55.941 ], 00:10:55.941 "driver_specific": {} 00:10:55.941 } 00:10:55.941 ] 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.941 "name": "Existed_Raid", 00:10:55.941 "uuid": "e5f23690-af05-4914-9d66-c3b4f299000a", 00:10:55.941 "strip_size_kb": 64, 00:10:55.941 "state": "online", 00:10:55.941 "raid_level": "raid0", 00:10:55.941 "superblock": true, 00:10:55.941 "num_base_bdevs": 2, 00:10:55.941 "num_base_bdevs_discovered": 2, 00:10:55.941 "num_base_bdevs_operational": 2, 00:10:55.941 "base_bdevs_list": [ 00:10:55.941 { 00:10:55.941 "name": "BaseBdev1", 00:10:55.941 "uuid": "a96c1e51-9c31-4521-807c-85f2ffa54116", 00:10:55.941 "is_configured": true, 00:10:55.941 "data_offset": 2048, 00:10:55.941 "data_size": 63488 
00:10:55.941 }, 00:10:55.941 { 00:10:55.941 "name": "BaseBdev2", 00:10:55.941 "uuid": "d8fa0410-977a-426e-b098-e450ed2af29e", 00:10:55.941 "is_configured": true, 00:10:55.941 "data_offset": 2048, 00:10:55.941 "data_size": 63488 00:10:55.941 } 00:10:55.941 ] 00:10:55.941 }' 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.941 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.506 [2024-11-27 14:10:26.870789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.506 "name": 
"Existed_Raid", 00:10:56.506 "aliases": [ 00:10:56.506 "e5f23690-af05-4914-9d66-c3b4f299000a" 00:10:56.506 ], 00:10:56.506 "product_name": "Raid Volume", 00:10:56.506 "block_size": 512, 00:10:56.506 "num_blocks": 126976, 00:10:56.506 "uuid": "e5f23690-af05-4914-9d66-c3b4f299000a", 00:10:56.506 "assigned_rate_limits": { 00:10:56.506 "rw_ios_per_sec": 0, 00:10:56.506 "rw_mbytes_per_sec": 0, 00:10:56.506 "r_mbytes_per_sec": 0, 00:10:56.506 "w_mbytes_per_sec": 0 00:10:56.506 }, 00:10:56.506 "claimed": false, 00:10:56.506 "zoned": false, 00:10:56.506 "supported_io_types": { 00:10:56.506 "read": true, 00:10:56.506 "write": true, 00:10:56.506 "unmap": true, 00:10:56.506 "flush": true, 00:10:56.506 "reset": true, 00:10:56.506 "nvme_admin": false, 00:10:56.506 "nvme_io": false, 00:10:56.506 "nvme_io_md": false, 00:10:56.506 "write_zeroes": true, 00:10:56.506 "zcopy": false, 00:10:56.506 "get_zone_info": false, 00:10:56.506 "zone_management": false, 00:10:56.506 "zone_append": false, 00:10:56.506 "compare": false, 00:10:56.506 "compare_and_write": false, 00:10:56.506 "abort": false, 00:10:56.506 "seek_hole": false, 00:10:56.506 "seek_data": false, 00:10:56.506 "copy": false, 00:10:56.506 "nvme_iov_md": false 00:10:56.506 }, 00:10:56.506 "memory_domains": [ 00:10:56.506 { 00:10:56.506 "dma_device_id": "system", 00:10:56.506 "dma_device_type": 1 00:10:56.506 }, 00:10:56.506 { 00:10:56.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.506 "dma_device_type": 2 00:10:56.506 }, 00:10:56.506 { 00:10:56.506 "dma_device_id": "system", 00:10:56.506 "dma_device_type": 1 00:10:56.506 }, 00:10:56.506 { 00:10:56.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.506 "dma_device_type": 2 00:10:56.506 } 00:10:56.506 ], 00:10:56.506 "driver_specific": { 00:10:56.506 "raid": { 00:10:56.506 "uuid": "e5f23690-af05-4914-9d66-c3b4f299000a", 00:10:56.506 "strip_size_kb": 64, 00:10:56.506 "state": "online", 00:10:56.506 "raid_level": "raid0", 00:10:56.506 "superblock": true, 00:10:56.506 
"num_base_bdevs": 2, 00:10:56.506 "num_base_bdevs_discovered": 2, 00:10:56.506 "num_base_bdevs_operational": 2, 00:10:56.506 "base_bdevs_list": [ 00:10:56.506 { 00:10:56.506 "name": "BaseBdev1", 00:10:56.506 "uuid": "a96c1e51-9c31-4521-807c-85f2ffa54116", 00:10:56.506 "is_configured": true, 00:10:56.506 "data_offset": 2048, 00:10:56.506 "data_size": 63488 00:10:56.506 }, 00:10:56.506 { 00:10:56.506 "name": "BaseBdev2", 00:10:56.506 "uuid": "d8fa0410-977a-426e-b098-e450ed2af29e", 00:10:56.506 "is_configured": true, 00:10:56.506 "data_offset": 2048, 00:10:56.506 "data_size": 63488 00:10:56.506 } 00:10:56.506 ] 00:10:56.506 } 00:10:56.506 } 00:10:56.506 }' 00:10:56.506 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.507 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.507 BaseBdev2' 00:10:56.507 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.507 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.507 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.507 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.507 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.507 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.507 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.764 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.765 [2024-11-27 14:10:27.126576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.765 [2024-11-27 14:10:27.126642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.765 [2024-11-27 14:10:27.126733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.765 14:10:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.765 "name": "Existed_Raid", 00:10:56.765 "uuid": "e5f23690-af05-4914-9d66-c3b4f299000a", 00:10:56.765 "strip_size_kb": 64, 00:10:56.765 "state": "offline", 00:10:56.765 "raid_level": "raid0", 00:10:56.765 "superblock": true, 00:10:56.765 "num_base_bdevs": 2, 00:10:56.765 "num_base_bdevs_discovered": 1, 00:10:56.765 "num_base_bdevs_operational": 1, 00:10:56.765 "base_bdevs_list": [ 00:10:56.765 { 00:10:56.765 "name": null, 00:10:56.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.765 "is_configured": false, 00:10:56.765 "data_offset": 0, 00:10:56.765 "data_size": 63488 00:10:56.765 }, 00:10:56.765 { 00:10:56.765 "name": "BaseBdev2", 00:10:56.765 "uuid": "d8fa0410-977a-426e-b098-e450ed2af29e", 00:10:56.765 "is_configured": true, 00:10:56.765 "data_offset": 2048, 00:10:56.765 "data_size": 63488 00:10:56.765 } 00:10:56.765 ] 00:10:56.765 }' 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.765 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.332 14:10:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.332 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.332 [2024-11-27 14:10:27.802354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.332 [2024-11-27 14:10:27.802683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.590 14:10:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61035 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61035 ']' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61035 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61035 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.590 killing process with pid 61035 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61035' 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61035 00:10:57.590 [2024-11-27 14:10:27.994863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.590 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61035 00:10:57.590 [2024-11-27 14:10:28.009685] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.964 14:10:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:10:58.964 00:10:58.964 real 0m5.638s 00:10:58.964 user 0m8.466s 00:10:58.964 sys 0m0.739s 00:10:58.964 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.964 ************************************ 00:10:58.964 END TEST raid_state_function_test_sb 00:10:58.964 ************************************ 00:10:58.964 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.964 14:10:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:58.964 14:10:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.964 14:10:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.964 14:10:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.964 ************************************ 00:10:58.964 START TEST raid_superblock_test 00:10:58.964 ************************************ 00:10:58.964 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:58.964 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:58.965 14:10:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61298 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61298 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61298 ']' 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.965 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.965 [2024-11-27 14:10:29.215188] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:10:58.965 [2024-11-27 14:10:29.215394] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61298 ] 00:10:58.965 [2024-11-27 14:10:29.392364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.222 [2024-11-27 14:10:29.528980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.481 [2024-11-27 14:10:29.736882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.481 [2024-11-27 14:10:29.736954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.739 14:10:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.739 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 malloc1 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 [2024-11-27 14:10:30.273072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.998 [2024-11-27 14:10:30.273336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.998 [2024-11-27 14:10:30.273416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:59.998 [2024-11-27 14:10:30.273695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.998 [2024-11-27 14:10:30.276654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.998 [2024-11-27 14:10:30.276864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.998 pt1 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.998 14:10:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 malloc2 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.998 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.998 [2024-11-27 14:10:30.331456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.998 [2024-11-27 14:10:30.331686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.998 [2024-11-27 14:10:30.331770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:59.998 
[2024-11-27 14:10:30.332025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.998 [2024-11-27 14:10:30.334985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.999 [2024-11-27 14:10:30.335161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.999 pt2 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.999 [2024-11-27 14:10:30.343655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.999 [2024-11-27 14:10:30.346361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.999 [2024-11-27 14:10:30.346710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:59.999 [2024-11-27 14:10:30.346872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:59.999 [2024-11-27 14:10:30.347244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:59.999 [2024-11-27 14:10:30.347458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:59.999 [2024-11-27 14:10:30.347480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:59.999 [2024-11-27 14:10:30.347761] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.999 "name": "raid_bdev1", 00:10:59.999 "uuid": 
"e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:10:59.999 "strip_size_kb": 64, 00:10:59.999 "state": "online", 00:10:59.999 "raid_level": "raid0", 00:10:59.999 "superblock": true, 00:10:59.999 "num_base_bdevs": 2, 00:10:59.999 "num_base_bdevs_discovered": 2, 00:10:59.999 "num_base_bdevs_operational": 2, 00:10:59.999 "base_bdevs_list": [ 00:10:59.999 { 00:10:59.999 "name": "pt1", 00:10:59.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.999 "is_configured": true, 00:10:59.999 "data_offset": 2048, 00:10:59.999 "data_size": 63488 00:10:59.999 }, 00:10:59.999 { 00:10:59.999 "name": "pt2", 00:10:59.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.999 "is_configured": true, 00:10:59.999 "data_offset": 2048, 00:10:59.999 "data_size": 63488 00:10:59.999 } 00:10:59.999 ] 00:10:59.999 }' 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.999 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.566 14:10:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.566 [2024-11-27 14:10:30.868200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.566 "name": "raid_bdev1", 00:11:00.566 "aliases": [ 00:11:00.566 "e393cf8f-8b85-47eb-a9a4-aa9034bf2501" 00:11:00.566 ], 00:11:00.566 "product_name": "Raid Volume", 00:11:00.566 "block_size": 512, 00:11:00.566 "num_blocks": 126976, 00:11:00.566 "uuid": "e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:11:00.566 "assigned_rate_limits": { 00:11:00.566 "rw_ios_per_sec": 0, 00:11:00.566 "rw_mbytes_per_sec": 0, 00:11:00.566 "r_mbytes_per_sec": 0, 00:11:00.566 "w_mbytes_per_sec": 0 00:11:00.566 }, 00:11:00.566 "claimed": false, 00:11:00.566 "zoned": false, 00:11:00.566 "supported_io_types": { 00:11:00.566 "read": true, 00:11:00.566 "write": true, 00:11:00.566 "unmap": true, 00:11:00.566 "flush": true, 00:11:00.566 "reset": true, 00:11:00.566 "nvme_admin": false, 00:11:00.566 "nvme_io": false, 00:11:00.566 "nvme_io_md": false, 00:11:00.566 "write_zeroes": true, 00:11:00.566 "zcopy": false, 00:11:00.566 "get_zone_info": false, 00:11:00.566 "zone_management": false, 00:11:00.566 "zone_append": false, 00:11:00.566 "compare": false, 00:11:00.566 "compare_and_write": false, 00:11:00.566 "abort": false, 00:11:00.566 "seek_hole": false, 00:11:00.566 "seek_data": false, 00:11:00.566 "copy": false, 00:11:00.566 "nvme_iov_md": false 00:11:00.566 }, 00:11:00.566 "memory_domains": [ 00:11:00.566 { 00:11:00.566 "dma_device_id": "system", 00:11:00.566 "dma_device_type": 1 00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.566 "dma_device_type": 2 00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "dma_device_id": "system", 00:11:00.566 "dma_device_type": 
1 00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.566 "dma_device_type": 2 00:11:00.566 } 00:11:00.566 ], 00:11:00.566 "driver_specific": { 00:11:00.566 "raid": { 00:11:00.566 "uuid": "e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:11:00.566 "strip_size_kb": 64, 00:11:00.566 "state": "online", 00:11:00.566 "raid_level": "raid0", 00:11:00.566 "superblock": true, 00:11:00.566 "num_base_bdevs": 2, 00:11:00.566 "num_base_bdevs_discovered": 2, 00:11:00.566 "num_base_bdevs_operational": 2, 00:11:00.566 "base_bdevs_list": [ 00:11:00.566 { 00:11:00.566 "name": "pt1", 00:11:00.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.566 "is_configured": true, 00:11:00.566 "data_offset": 2048, 00:11:00.566 "data_size": 63488 00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "name": "pt2", 00:11:00.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.566 "is_configured": true, 00:11:00.566 "data_offset": 2048, 00:11:00.566 "data_size": 63488 00:11:00.566 } 00:11:00.566 ] 00:11:00.566 } 00:11:00.566 } 00:11:00.566 }' 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.566 pt2' 00:11:00.566 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.566 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.566 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.566 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.566 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.566 14:10:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.566 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.566 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 [2024-11-27 14:10:31.144266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e393cf8f-8b85-47eb-a9a4-aa9034bf2501 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e393cf8f-8b85-47eb-a9a4-aa9034bf2501 ']' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 [2024-11-27 14:10:31.195873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.825 [2024-11-27 14:10:31.195901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.825 [2024-11-27 14:10:31.196005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.825 [2024-11-27 14:10:31.196072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.825 [2024-11-27 14:10:31.196092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.825 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.084 [2024-11-27 14:10:31.339950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:01.084 [2024-11-27 14:10:31.342613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:01.084 [2024-11-27 14:10:31.342717] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:01.084 [2024-11-27 14:10:31.342802] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:01.084 [2024-11-27 14:10:31.342859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.084 [2024-11-27 14:10:31.342882] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:01.084 request: 00:11:01.084 { 00:11:01.084 "name": "raid_bdev1", 00:11:01.084 "raid_level": "raid0", 00:11:01.084 "base_bdevs": [ 00:11:01.084 "malloc1", 00:11:01.084 "malloc2" 00:11:01.084 ], 00:11:01.084 "strip_size_kb": 64, 00:11:01.084 "superblock": false, 00:11:01.084 "method": "bdev_raid_create", 00:11:01.084 "req_id": 1 00:11:01.084 } 00:11:01.084 Got JSON-RPC error response 00:11:01.084 response: 00:11:01.084 { 00:11:01.084 "code": -17, 00:11:01.084 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:01.084 } 00:11:01.084 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.085 [2024-11-27 14:10:31.407985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.085 [2024-11-27 14:10:31.408196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.085 [2024-11-27 14:10:31.408267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:01.085 [2024-11-27 14:10:31.408400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.085 [2024-11-27 14:10:31.411367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.085 [2024-11-27 14:10:31.411548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.085 [2024-11-27 14:10:31.411775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:01.085 [2024-11-27 14:10:31.411983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.085 pt1 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.085 "name": "raid_bdev1", 00:11:01.085 "uuid": "e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:11:01.085 "strip_size_kb": 64, 00:11:01.085 "state": "configuring", 00:11:01.085 "raid_level": "raid0", 00:11:01.085 "superblock": true, 00:11:01.085 "num_base_bdevs": 2, 00:11:01.085 "num_base_bdevs_discovered": 1, 00:11:01.085 "num_base_bdevs_operational": 2, 00:11:01.085 "base_bdevs_list": [ 00:11:01.085 { 00:11:01.085 "name": "pt1", 00:11:01.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.085 "is_configured": true, 00:11:01.085 "data_offset": 2048, 00:11:01.085 "data_size": 63488 00:11:01.085 }, 00:11:01.085 { 00:11:01.085 "name": null, 00:11:01.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.085 "is_configured": false, 00:11:01.085 "data_offset": 2048, 00:11:01.085 "data_size": 63488 00:11:01.085 } 00:11:01.085 ] 00:11:01.085 }' 00:11:01.085 14:10:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.085 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.653 [2024-11-27 14:10:31.952504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.653 [2024-11-27 14:10:31.952596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.653 [2024-11-27 14:10:31.952628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:01.653 [2024-11-27 14:10:31.952646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.653 [2024-11-27 14:10:31.953241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.653 [2024-11-27 14:10:31.953281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.653 [2024-11-27 14:10:31.953384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.653 [2024-11-27 14:10:31.953428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.653 [2024-11-27 14:10:31.953579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:01.653 [2024-11-27 14:10:31.953601] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:01.653 [2024-11-27 14:10:31.953923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:01.653 [2024-11-27 14:10:31.954119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:01.653 [2024-11-27 14:10:31.954135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:01.653 [2024-11-27 14:10:31.954312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.653 pt2 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.653 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.653 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.653 "name": "raid_bdev1", 00:11:01.653 "uuid": "e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:11:01.653 "strip_size_kb": 64, 00:11:01.653 "state": "online", 00:11:01.653 "raid_level": "raid0", 00:11:01.653 "superblock": true, 00:11:01.653 "num_base_bdevs": 2, 00:11:01.653 "num_base_bdevs_discovered": 2, 00:11:01.653 "num_base_bdevs_operational": 2, 00:11:01.653 "base_bdevs_list": [ 00:11:01.653 { 00:11:01.653 "name": "pt1", 00:11:01.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.653 "is_configured": true, 00:11:01.653 "data_offset": 2048, 00:11:01.653 "data_size": 63488 00:11:01.653 }, 00:11:01.653 { 00:11:01.653 "name": "pt2", 00:11:01.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.653 "is_configured": true, 00:11:01.653 "data_offset": 2048, 00:11:01.653 "data_size": 63488 00:11:01.653 } 00:11:01.653 ] 00:11:01.653 }' 00:11:01.653 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.653 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:02.221 
14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.221 [2024-11-27 14:10:32.476941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.221 "name": "raid_bdev1", 00:11:02.221 "aliases": [ 00:11:02.221 "e393cf8f-8b85-47eb-a9a4-aa9034bf2501" 00:11:02.221 ], 00:11:02.221 "product_name": "Raid Volume", 00:11:02.221 "block_size": 512, 00:11:02.221 "num_blocks": 126976, 00:11:02.221 "uuid": "e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:11:02.221 "assigned_rate_limits": { 00:11:02.221 "rw_ios_per_sec": 0, 00:11:02.221 "rw_mbytes_per_sec": 0, 00:11:02.221 "r_mbytes_per_sec": 0, 00:11:02.221 "w_mbytes_per_sec": 0 00:11:02.221 }, 00:11:02.221 "claimed": false, 00:11:02.221 "zoned": false, 00:11:02.221 "supported_io_types": { 00:11:02.221 "read": true, 00:11:02.221 "write": true, 00:11:02.221 "unmap": true, 00:11:02.221 "flush": true, 00:11:02.221 "reset": true, 00:11:02.221 "nvme_admin": false, 00:11:02.221 "nvme_io": false, 00:11:02.221 "nvme_io_md": false, 00:11:02.221 
"write_zeroes": true, 00:11:02.221 "zcopy": false, 00:11:02.221 "get_zone_info": false, 00:11:02.221 "zone_management": false, 00:11:02.221 "zone_append": false, 00:11:02.221 "compare": false, 00:11:02.221 "compare_and_write": false, 00:11:02.221 "abort": false, 00:11:02.221 "seek_hole": false, 00:11:02.221 "seek_data": false, 00:11:02.221 "copy": false, 00:11:02.221 "nvme_iov_md": false 00:11:02.221 }, 00:11:02.221 "memory_domains": [ 00:11:02.221 { 00:11:02.221 "dma_device_id": "system", 00:11:02.221 "dma_device_type": 1 00:11:02.221 }, 00:11:02.221 { 00:11:02.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.221 "dma_device_type": 2 00:11:02.221 }, 00:11:02.221 { 00:11:02.221 "dma_device_id": "system", 00:11:02.221 "dma_device_type": 1 00:11:02.221 }, 00:11:02.221 { 00:11:02.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.221 "dma_device_type": 2 00:11:02.221 } 00:11:02.221 ], 00:11:02.221 "driver_specific": { 00:11:02.221 "raid": { 00:11:02.221 "uuid": "e393cf8f-8b85-47eb-a9a4-aa9034bf2501", 00:11:02.221 "strip_size_kb": 64, 00:11:02.221 "state": "online", 00:11:02.221 "raid_level": "raid0", 00:11:02.221 "superblock": true, 00:11:02.221 "num_base_bdevs": 2, 00:11:02.221 "num_base_bdevs_discovered": 2, 00:11:02.221 "num_base_bdevs_operational": 2, 00:11:02.221 "base_bdevs_list": [ 00:11:02.221 { 00:11:02.221 "name": "pt1", 00:11:02.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.221 "is_configured": true, 00:11:02.221 "data_offset": 2048, 00:11:02.221 "data_size": 63488 00:11:02.221 }, 00:11:02.221 { 00:11:02.221 "name": "pt2", 00:11:02.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.221 "is_configured": true, 00:11:02.221 "data_offset": 2048, 00:11:02.221 "data_size": 63488 00:11:02.221 } 00:11:02.221 ] 00:11:02.221 } 00:11:02.221 } 00:11:02.221 }' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:02.221 pt2' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.221 14:10:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:02.221 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.222 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.222 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.480 [2024-11-27 14:10:32.732925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e393cf8f-8b85-47eb-a9a4-aa9034bf2501 '!=' e393cf8f-8b85-47eb-a9a4-aa9034bf2501 ']' 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61298 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61298 ']' 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61298 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61298 00:11:02.480 killing process with pid 61298 
00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61298' 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61298 00:11:02.480 [2024-11-27 14:10:32.813596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.480 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61298 00:11:02.480 [2024-11-27 14:10:32.813696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.480 [2024-11-27 14:10:32.813763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.480 [2024-11-27 14:10:32.813790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:02.813 [2024-11-27 14:10:32.996422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.747 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:03.747 00:11:03.747 real 0m4.926s 00:11:03.747 user 0m7.294s 00:11:03.747 sys 0m0.682s 00:11:03.747 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.747 ************************************ 00:11:03.747 END TEST raid_superblock_test 00:11:03.747 ************************************ 00:11:03.747 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.747 14:10:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:03.747 14:10:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.747 14:10:34 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.747 14:10:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.747 ************************************ 00:11:03.747 START TEST raid_read_error_test 00:11:03.747 ************************************ 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.747 14:10:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pWNXgdv9Q3 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61514 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61514 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61514 ']' 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.747 14:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.747 [2024-11-27 14:10:34.215857] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:03.747 [2024-11-27 14:10:34.216055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61514 ] 00:11:04.017 [2024-11-27 14:10:34.398012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.017 [2024-11-27 14:10:34.528416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.275 [2024-11-27 14:10:34.730170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.275 [2024-11-27 14:10:34.730242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.842 BaseBdev1_malloc 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.842 true 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.842 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.843 [2024-11-27 14:10:35.220579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.843 [2024-11-27 14:10:35.220645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.843 [2024-11-27 14:10:35.220675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.843 [2024-11-27 14:10:35.220695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.843 [2024-11-27 14:10:35.223620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.843 [2024-11-27 14:10:35.223669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.843 BaseBdev1 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:04.843 BaseBdev2_malloc 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.843 true 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.843 [2024-11-27 14:10:35.276038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.843 [2024-11-27 14:10:35.276103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.843 [2024-11-27 14:10:35.276129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.843 [2024-11-27 14:10:35.276148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.843 [2024-11-27 14:10:35.279014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.843 [2024-11-27 14:10:35.279081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.843 BaseBdev2 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:04.843 14:10:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.843 [2024-11-27 14:10:35.284113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.843 [2024-11-27 14:10:35.286572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.843 [2024-11-27 14:10:35.286847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.843 [2024-11-27 14:10:35.286882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:04.843 [2024-11-27 14:10:35.287187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:04.843 [2024-11-27 14:10:35.287408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.843 [2024-11-27 14:10:35.287430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:04.843 [2024-11-27 14:10:35.287623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.843 "name": "raid_bdev1", 00:11:04.843 "uuid": "35396015-eb16-4a4d-aa4d-0982b6d3bdb1", 00:11:04.843 "strip_size_kb": 64, 00:11:04.843 "state": "online", 00:11:04.843 "raid_level": "raid0", 00:11:04.843 "superblock": true, 00:11:04.843 "num_base_bdevs": 2, 00:11:04.843 "num_base_bdevs_discovered": 2, 00:11:04.843 "num_base_bdevs_operational": 2, 00:11:04.843 "base_bdevs_list": [ 00:11:04.843 { 00:11:04.843 "name": "BaseBdev1", 00:11:04.843 "uuid": "f58398b6-e3b9-559b-9b7f-dabb2fb2f580", 00:11:04.843 "is_configured": true, 00:11:04.843 "data_offset": 2048, 00:11:04.843 "data_size": 63488 00:11:04.843 }, 00:11:04.843 { 00:11:04.843 "name": "BaseBdev2", 00:11:04.843 "uuid": "2e5a44b6-45df-5c45-926f-06de1d6dbf02", 00:11:04.843 "is_configured": true, 00:11:04.843 "data_offset": 2048, 00:11:04.843 "data_size": 63488 00:11:04.843 } 00:11:04.843 ] 00:11:04.843 }' 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.843 14:10:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.410 14:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:05.410 [2024-11-27 14:10:35.909699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.343 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.344 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.344 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.344 14:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.344 "name": "raid_bdev1", 00:11:06.344 "uuid": "35396015-eb16-4a4d-aa4d-0982b6d3bdb1", 00:11:06.344 "strip_size_kb": 64, 00:11:06.344 "state": "online", 00:11:06.344 "raid_level": "raid0", 00:11:06.344 "superblock": true, 00:11:06.344 "num_base_bdevs": 2, 00:11:06.344 "num_base_bdevs_discovered": 2, 00:11:06.344 "num_base_bdevs_operational": 2, 00:11:06.344 "base_bdevs_list": [ 00:11:06.344 { 00:11:06.344 "name": "BaseBdev1", 00:11:06.344 "uuid": "f58398b6-e3b9-559b-9b7f-dabb2fb2f580", 00:11:06.344 "is_configured": true, 00:11:06.344 "data_offset": 2048, 00:11:06.344 "data_size": 63488 00:11:06.344 }, 00:11:06.344 { 00:11:06.344 "name": "BaseBdev2", 00:11:06.344 "uuid": "2e5a44b6-45df-5c45-926f-06de1d6dbf02", 00:11:06.344 "is_configured": true, 00:11:06.344 "data_offset": 2048, 00:11:06.344 "data_size": 63488 00:11:06.344 } 00:11:06.344 ] 00:11:06.344 }' 00:11:06.344 14:10:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.344 14:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.910 [2024-11-27 14:10:37.276792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.910 [2024-11-27 14:10:37.276854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.910 [2024-11-27 14:10:37.280324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.910 [2024-11-27 14:10:37.280386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.910 [2024-11-27 14:10:37.280430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.910 [2024-11-27 14:10:37.280448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:06.910 { 00:11:06.910 "results": [ 00:11:06.910 { 00:11:06.910 "job": "raid_bdev1", 00:11:06.910 "core_mask": "0x1", 00:11:06.910 "workload": "randrw", 00:11:06.910 "percentage": 50, 00:11:06.910 "status": "finished", 00:11:06.910 "queue_depth": 1, 00:11:06.910 "io_size": 131072, 00:11:06.910 "runtime": 1.364831, 00:11:06.910 "iops": 10843.833412341894, 00:11:06.910 "mibps": 1355.4791765427367, 00:11:06.910 "io_failed": 1, 00:11:06.910 "io_timeout": 0, 00:11:06.910 "avg_latency_us": 128.0571558432784, 00:11:06.910 "min_latency_us": 40.96, 00:11:06.910 "max_latency_us": 1854.370909090909 00:11:06.910 } 00:11:06.910 ], 00:11:06.910 "core_count": 1 00:11:06.910 } 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61514 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61514 ']' 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61514 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61514 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.910 killing process with pid 61514 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61514' 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61514 00:11:06.910 [2024-11-27 14:10:37.318704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.910 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61514 00:11:07.168 [2024-11-27 14:10:37.438760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pWNXgdv9Q3 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:08.103 00:11:08.103 real 0m4.458s 00:11:08.103 user 0m5.559s 00:11:08.103 sys 0m0.553s 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.103 14:10:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.103 ************************************ 00:11:08.103 END TEST raid_read_error_test 00:11:08.103 ************************************ 00:11:08.103 14:10:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:11:08.103 14:10:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:08.103 14:10:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.103 14:10:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.103 ************************************ 00:11:08.103 START TEST raid_write_error_test 00:11:08.103 ************************************ 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.103 14:10:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:08.103 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Vf4PMN0QsW 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61656 00:11:08.361 14:10:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61656 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61656 ']' 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.361 14:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.361 [2024-11-27 14:10:38.708219] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:11:08.361 [2024-11-27 14:10:38.708370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61656 ] 00:11:08.723 [2024-11-27 14:10:38.880547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.723 [2024-11-27 14:10:39.012531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.723 [2024-11-27 14:10:39.218041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.723 [2024-11-27 14:10:39.218097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 BaseBdev1_malloc 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 true 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 [2024-11-27 14:10:39.816783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.315 [2024-11-27 14:10:39.816889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.315 [2024-11-27 14:10:39.816934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.315 [2024-11-27 14:10:39.816965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.315 [2024-11-27 14:10:39.820800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.315 [2024-11-27 14:10:39.820880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.315 BaseBdev1 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.315 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.573 BaseBdev2_malloc 00:11:09.573 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.573 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.573 14:10:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.573 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.573 true 00:11:09.573 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.573 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.573 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 [2024-11-27 14:10:39.887148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.574 [2024-11-27 14:10:39.887223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.574 [2024-11-27 14:10:39.887249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.574 [2024-11-27 14:10:39.887267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.574 [2024-11-27 14:10:39.890071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.574 [2024-11-27 14:10:39.890129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.574 BaseBdev2 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 [2024-11-27 14:10:39.895225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:09.574 [2024-11-27 14:10:39.897658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.574 [2024-11-27 14:10:39.897928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.574 [2024-11-27 14:10:39.897971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:09.574 [2024-11-27 14:10:39.898289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:09.574 [2024-11-27 14:10:39.898526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.574 [2024-11-27 14:10:39.898548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:09.574 [2024-11-27 14:10:39.898739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.574 "name": "raid_bdev1", 00:11:09.574 "uuid": "d7b5d460-083f-46d4-b007-cfc3fec543c1", 00:11:09.574 "strip_size_kb": 64, 00:11:09.574 "state": "online", 00:11:09.574 "raid_level": "raid0", 00:11:09.574 "superblock": true, 00:11:09.574 "num_base_bdevs": 2, 00:11:09.574 "num_base_bdevs_discovered": 2, 00:11:09.574 "num_base_bdevs_operational": 2, 00:11:09.574 "base_bdevs_list": [ 00:11:09.574 { 00:11:09.574 "name": "BaseBdev1", 00:11:09.574 "uuid": "3353fcc0-9c50-5de0-9010-2fb79d52669e", 00:11:09.574 "is_configured": true, 00:11:09.574 "data_offset": 2048, 00:11:09.574 "data_size": 63488 00:11:09.574 }, 00:11:09.574 { 00:11:09.574 "name": "BaseBdev2", 00:11:09.574 "uuid": "564826dc-f6e6-5570-bd1f-9141d28cea1c", 00:11:09.574 "is_configured": true, 00:11:09.574 "data_offset": 2048, 00:11:09.574 "data_size": 63488 00:11:09.574 } 00:11:09.574 ] 00:11:09.574 }' 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.574 14:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.140 14:10:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.140 14:10:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.140 [2024-11-27 14:10:40.568778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.073 14:10:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.073 "name": "raid_bdev1", 00:11:11.073 "uuid": "d7b5d460-083f-46d4-b007-cfc3fec543c1", 00:11:11.073 "strip_size_kb": 64, 00:11:11.073 "state": "online", 00:11:11.073 "raid_level": "raid0", 00:11:11.073 "superblock": true, 00:11:11.073 "num_base_bdevs": 2, 00:11:11.073 "num_base_bdevs_discovered": 2, 00:11:11.073 "num_base_bdevs_operational": 2, 00:11:11.073 "base_bdevs_list": [ 00:11:11.073 { 00:11:11.073 "name": "BaseBdev1", 00:11:11.073 "uuid": "3353fcc0-9c50-5de0-9010-2fb79d52669e", 00:11:11.073 "is_configured": true, 00:11:11.073 "data_offset": 2048, 00:11:11.073 "data_size": 63488 00:11:11.073 }, 00:11:11.073 { 00:11:11.073 "name": "BaseBdev2", 00:11:11.073 "uuid": "564826dc-f6e6-5570-bd1f-9141d28cea1c", 00:11:11.073 "is_configured": true, 00:11:11.073 "data_offset": 2048, 00:11:11.073 "data_size": 63488 00:11:11.073 } 00:11:11.073 ] 00:11:11.073 }' 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.073 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.637 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.637 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 [2024-11-27 14:10:41.957837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.637 [2024-11-27 14:10:41.957885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.637 [2024-11-27 14:10:41.961336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.637 [2024-11-27 14:10:41.961400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.637 [2024-11-27 14:10:41.961448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.637 [2024-11-27 14:10:41.961468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:11.637 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.637 14:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61656 00:11:11.637 { 00:11:11.637 "results": [ 00:11:11.637 { 00:11:11.637 "job": "raid_bdev1", 00:11:11.637 "core_mask": "0x1", 00:11:11.637 "workload": "randrw", 00:11:11.637 "percentage": 50, 00:11:11.637 "status": "finished", 00:11:11.637 "queue_depth": 1, 00:11:11.637 "io_size": 131072, 00:11:11.637 "runtime": 1.38679, 00:11:11.637 "iops": 10504.113816799949, 00:11:11.637 "mibps": 1313.0142270999936, 00:11:11.637 "io_failed": 1, 00:11:11.637 "io_timeout": 0, 00:11:11.637 "avg_latency_us": 132.4688572712296, 00:11:11.637 "min_latency_us": 43.985454545454544, 00:11:11.637 "max_latency_us": 1832.0290909090909 00:11:11.637 } 00:11:11.637 ], 00:11:11.637 "core_count": 1 00:11:11.637 } 00:11:11.637 14:10:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61656 ']' 00:11:11.638 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61656 00:11:11.638 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.638 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.638 14:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61656 00:11:11.638 14:10:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.638 killing process with pid 61656 00:11:11.638 14:10:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.638 14:10:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61656' 00:11:11.638 14:10:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61656 00:11:11.638 [2024-11-27 14:10:42.002145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.638 14:10:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61656 00:11:11.638 [2024-11-27 14:10:42.126226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.011 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Vf4PMN0QsW 00:11:13.011 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.011 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.011 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:13.011 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:13.012 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.012 14:10:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:11:13.012 14:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:13.012 00:11:13.012 real 0m4.669s 00:11:13.012 user 0m5.881s 00:11:13.012 sys 0m0.566s 00:11:13.012 14:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.012 14:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.012 ************************************ 00:11:13.012 END TEST raid_write_error_test 00:11:13.012 ************************************ 00:11:13.012 14:10:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:13.012 14:10:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:13.012 14:10:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.012 14:10:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.012 14:10:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.012 ************************************ 00:11:13.012 START TEST raid_state_function_test 00:11:13.012 ************************************ 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61800 00:11:13.012 14:10:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61800' 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.012 Process raid pid: 61800 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61800 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61800 ']' 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.012 14:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.012 [2024-11-27 14:10:43.437117] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:11:13.012 [2024-11-27 14:10:43.437321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.270 [2024-11-27 14:10:43.630339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.528 [2024-11-27 14:10:43.791972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.786 [2024-11-27 14:10:44.058213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.786 [2024-11-27 14:10:44.058291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.043 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.044 [2024-11-27 14:10:44.480112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.044 [2024-11-27 14:10:44.480176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.044 [2024-11-27 14:10:44.480193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.044 [2024-11-27 14:10:44.480210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.044 14:10:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.044 "name": "Existed_Raid", 00:11:14.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.044 "strip_size_kb": 64, 00:11:14.044 "state": "configuring", 00:11:14.044 
"raid_level": "concat", 00:11:14.044 "superblock": false, 00:11:14.044 "num_base_bdevs": 2, 00:11:14.044 "num_base_bdevs_discovered": 0, 00:11:14.044 "num_base_bdevs_operational": 2, 00:11:14.044 "base_bdevs_list": [ 00:11:14.044 { 00:11:14.044 "name": "BaseBdev1", 00:11:14.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.044 "is_configured": false, 00:11:14.044 "data_offset": 0, 00:11:14.044 "data_size": 0 00:11:14.044 }, 00:11:14.044 { 00:11:14.044 "name": "BaseBdev2", 00:11:14.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.044 "is_configured": false, 00:11:14.044 "data_offset": 0, 00:11:14.044 "data_size": 0 00:11:14.044 } 00:11:14.044 ] 00:11:14.044 }' 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.044 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.610 [2024-11-27 14:10:44.992191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.610 [2024-11-27 14:10:44.992238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.610 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:14.610 [2024-11-27 14:10:45.000165] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.610 [2024-11-27 14:10:45.000218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.610 [2024-11-27 14:10:45.000233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.610 [2024-11-27 14:10:45.000252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.610 [2024-11-27 14:10:45.045389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.610 BaseBdev1 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.610 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.610 [ 00:11:14.610 { 00:11:14.610 "name": "BaseBdev1", 00:11:14.610 "aliases": [ 00:11:14.610 "51476ae0-c9e2-47ec-9050-913d1c7a8a49" 00:11:14.610 ], 00:11:14.610 "product_name": "Malloc disk", 00:11:14.610 "block_size": 512, 00:11:14.610 "num_blocks": 65536, 00:11:14.611 "uuid": "51476ae0-c9e2-47ec-9050-913d1c7a8a49", 00:11:14.611 "assigned_rate_limits": { 00:11:14.611 "rw_ios_per_sec": 0, 00:11:14.611 "rw_mbytes_per_sec": 0, 00:11:14.611 "r_mbytes_per_sec": 0, 00:11:14.611 "w_mbytes_per_sec": 0 00:11:14.611 }, 00:11:14.611 "claimed": true, 00:11:14.611 "claim_type": "exclusive_write", 00:11:14.611 "zoned": false, 00:11:14.611 "supported_io_types": { 00:11:14.611 "read": true, 00:11:14.611 "write": true, 00:11:14.611 "unmap": true, 00:11:14.611 "flush": true, 00:11:14.611 "reset": true, 00:11:14.611 "nvme_admin": false, 00:11:14.611 "nvme_io": false, 00:11:14.611 "nvme_io_md": false, 00:11:14.611 "write_zeroes": true, 00:11:14.611 "zcopy": true, 00:11:14.611 "get_zone_info": false, 00:11:14.611 "zone_management": false, 00:11:14.611 "zone_append": false, 00:11:14.611 "compare": false, 00:11:14.611 "compare_and_write": false, 00:11:14.611 "abort": true, 00:11:14.611 "seek_hole": false, 00:11:14.611 "seek_data": false, 00:11:14.611 "copy": true, 00:11:14.611 "nvme_iov_md": 
false 00:11:14.611 }, 00:11:14.611 "memory_domains": [ 00:11:14.611 { 00:11:14.611 "dma_device_id": "system", 00:11:14.611 "dma_device_type": 1 00:11:14.611 }, 00:11:14.611 { 00:11:14.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.611 "dma_device_type": 2 00:11:14.611 } 00:11:14.611 ], 00:11:14.611 "driver_specific": {} 00:11:14.611 } 00:11:14.611 ] 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.611 
14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.868 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.868 "name": "Existed_Raid", 00:11:14.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.868 "strip_size_kb": 64, 00:11:14.868 "state": "configuring", 00:11:14.868 "raid_level": "concat", 00:11:14.868 "superblock": false, 00:11:14.868 "num_base_bdevs": 2, 00:11:14.868 "num_base_bdevs_discovered": 1, 00:11:14.868 "num_base_bdevs_operational": 2, 00:11:14.868 "base_bdevs_list": [ 00:11:14.868 { 00:11:14.868 "name": "BaseBdev1", 00:11:14.868 "uuid": "51476ae0-c9e2-47ec-9050-913d1c7a8a49", 00:11:14.868 "is_configured": true, 00:11:14.868 "data_offset": 0, 00:11:14.868 "data_size": 65536 00:11:14.868 }, 00:11:14.868 { 00:11:14.868 "name": "BaseBdev2", 00:11:14.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.868 "is_configured": false, 00:11:14.868 "data_offset": 0, 00:11:14.868 "data_size": 0 00:11:14.868 } 00:11:14.869 ] 00:11:14.869 }' 00:11:14.869 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.869 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.127 [2024-11-27 14:10:45.581598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.127 [2024-11-27 14:10:45.581666] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.127 [2024-11-27 14:10:45.589618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.127 [2024-11-27 14:10:45.592046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.127 [2024-11-27 14:10:45.592104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.127 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.386 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.386 "name": "Existed_Raid", 00:11:15.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.386 "strip_size_kb": 64, 00:11:15.386 "state": "configuring", 00:11:15.386 "raid_level": "concat", 00:11:15.386 "superblock": false, 00:11:15.386 "num_base_bdevs": 2, 00:11:15.386 "num_base_bdevs_discovered": 1, 00:11:15.386 "num_base_bdevs_operational": 2, 00:11:15.386 "base_bdevs_list": [ 00:11:15.386 { 00:11:15.386 "name": "BaseBdev1", 00:11:15.386 "uuid": "51476ae0-c9e2-47ec-9050-913d1c7a8a49", 00:11:15.386 "is_configured": true, 00:11:15.386 "data_offset": 0, 00:11:15.386 "data_size": 65536 00:11:15.386 }, 00:11:15.386 { 00:11:15.386 "name": "BaseBdev2", 00:11:15.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.386 "is_configured": false, 00:11:15.386 "data_offset": 0, 00:11:15.386 "data_size": 0 00:11:15.386 } 
00:11:15.386 ] 00:11:15.386 }' 00:11:15.386 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.386 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.643 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.643 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.643 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.901 [2024-11-27 14:10:46.155441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.901 [2024-11-27 14:10:46.155518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.901 [2024-11-27 14:10:46.155532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:15.901 [2024-11-27 14:10:46.155929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:15.901 [2024-11-27 14:10:46.156215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.901 [2024-11-27 14:10:46.156252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:15.901 [2024-11-27 14:10:46.156618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.901 BaseBdev2 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.901 14:10:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.901 [ 00:11:15.901 { 00:11:15.901 "name": "BaseBdev2", 00:11:15.901 "aliases": [ 00:11:15.901 "6120d46a-c4bc-4f05-9df3-863c267c6999" 00:11:15.901 ], 00:11:15.901 "product_name": "Malloc disk", 00:11:15.901 "block_size": 512, 00:11:15.901 "num_blocks": 65536, 00:11:15.901 "uuid": "6120d46a-c4bc-4f05-9df3-863c267c6999", 00:11:15.901 "assigned_rate_limits": { 00:11:15.901 "rw_ios_per_sec": 0, 00:11:15.901 "rw_mbytes_per_sec": 0, 00:11:15.901 "r_mbytes_per_sec": 0, 00:11:15.901 "w_mbytes_per_sec": 0 00:11:15.901 }, 00:11:15.901 "claimed": true, 00:11:15.901 "claim_type": "exclusive_write", 00:11:15.901 "zoned": false, 00:11:15.901 "supported_io_types": { 00:11:15.901 "read": true, 00:11:15.901 "write": true, 00:11:15.901 "unmap": true, 00:11:15.901 "flush": true, 00:11:15.901 "reset": true, 00:11:15.901 "nvme_admin": false, 00:11:15.901 "nvme_io": false, 00:11:15.901 "nvme_io_md": 
false, 00:11:15.901 "write_zeroes": true, 00:11:15.901 "zcopy": true, 00:11:15.901 "get_zone_info": false, 00:11:15.901 "zone_management": false, 00:11:15.901 "zone_append": false, 00:11:15.901 "compare": false, 00:11:15.901 "compare_and_write": false, 00:11:15.901 "abort": true, 00:11:15.901 "seek_hole": false, 00:11:15.901 "seek_data": false, 00:11:15.901 "copy": true, 00:11:15.901 "nvme_iov_md": false 00:11:15.901 }, 00:11:15.901 "memory_domains": [ 00:11:15.901 { 00:11:15.901 "dma_device_id": "system", 00:11:15.901 "dma_device_type": 1 00:11:15.901 }, 00:11:15.901 { 00:11:15.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.901 "dma_device_type": 2 00:11:15.901 } 00:11:15.901 ], 00:11:15.901 "driver_specific": {} 00:11:15.901 } 00:11:15.901 ] 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.901 "name": "Existed_Raid", 00:11:15.901 "uuid": "cd765f37-6206-48b1-846a-a5fd512d1474", 00:11:15.901 "strip_size_kb": 64, 00:11:15.901 "state": "online", 00:11:15.901 "raid_level": "concat", 00:11:15.901 "superblock": false, 00:11:15.901 "num_base_bdevs": 2, 00:11:15.901 "num_base_bdevs_discovered": 2, 00:11:15.901 "num_base_bdevs_operational": 2, 00:11:15.901 "base_bdevs_list": [ 00:11:15.901 { 00:11:15.901 "name": "BaseBdev1", 00:11:15.901 "uuid": "51476ae0-c9e2-47ec-9050-913d1c7a8a49", 00:11:15.901 "is_configured": true, 00:11:15.901 "data_offset": 0, 00:11:15.901 "data_size": 65536 00:11:15.901 }, 00:11:15.901 { 00:11:15.901 "name": "BaseBdev2", 00:11:15.901 "uuid": "6120d46a-c4bc-4f05-9df3-863c267c6999", 00:11:15.901 "is_configured": true, 00:11:15.901 "data_offset": 0, 00:11:15.901 "data_size": 65536 00:11:15.901 } 00:11:15.901 ] 00:11:15.901 }' 00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:15.901 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.466 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.466 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.466 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.466 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.466 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.466 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.467 [2024-11-27 14:10:46.696079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.467 "name": "Existed_Raid", 00:11:16.467 "aliases": [ 00:11:16.467 "cd765f37-6206-48b1-846a-a5fd512d1474" 00:11:16.467 ], 00:11:16.467 "product_name": "Raid Volume", 00:11:16.467 "block_size": 512, 00:11:16.467 "num_blocks": 131072, 00:11:16.467 "uuid": "cd765f37-6206-48b1-846a-a5fd512d1474", 00:11:16.467 "assigned_rate_limits": { 00:11:16.467 "rw_ios_per_sec": 0, 00:11:16.467 "rw_mbytes_per_sec": 0, 00:11:16.467 "r_mbytes_per_sec": 
0, 00:11:16.467 "w_mbytes_per_sec": 0 00:11:16.467 }, 00:11:16.467 "claimed": false, 00:11:16.467 "zoned": false, 00:11:16.467 "supported_io_types": { 00:11:16.467 "read": true, 00:11:16.467 "write": true, 00:11:16.467 "unmap": true, 00:11:16.467 "flush": true, 00:11:16.467 "reset": true, 00:11:16.467 "nvme_admin": false, 00:11:16.467 "nvme_io": false, 00:11:16.467 "nvme_io_md": false, 00:11:16.467 "write_zeroes": true, 00:11:16.467 "zcopy": false, 00:11:16.467 "get_zone_info": false, 00:11:16.467 "zone_management": false, 00:11:16.467 "zone_append": false, 00:11:16.467 "compare": false, 00:11:16.467 "compare_and_write": false, 00:11:16.467 "abort": false, 00:11:16.467 "seek_hole": false, 00:11:16.467 "seek_data": false, 00:11:16.467 "copy": false, 00:11:16.467 "nvme_iov_md": false 00:11:16.467 }, 00:11:16.467 "memory_domains": [ 00:11:16.467 { 00:11:16.467 "dma_device_id": "system", 00:11:16.467 "dma_device_type": 1 00:11:16.467 }, 00:11:16.467 { 00:11:16.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.467 "dma_device_type": 2 00:11:16.467 }, 00:11:16.467 { 00:11:16.467 "dma_device_id": "system", 00:11:16.467 "dma_device_type": 1 00:11:16.467 }, 00:11:16.467 { 00:11:16.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.467 "dma_device_type": 2 00:11:16.467 } 00:11:16.467 ], 00:11:16.467 "driver_specific": { 00:11:16.467 "raid": { 00:11:16.467 "uuid": "cd765f37-6206-48b1-846a-a5fd512d1474", 00:11:16.467 "strip_size_kb": 64, 00:11:16.467 "state": "online", 00:11:16.467 "raid_level": "concat", 00:11:16.467 "superblock": false, 00:11:16.467 "num_base_bdevs": 2, 00:11:16.467 "num_base_bdevs_discovered": 2, 00:11:16.467 "num_base_bdevs_operational": 2, 00:11:16.467 "base_bdevs_list": [ 00:11:16.467 { 00:11:16.467 "name": "BaseBdev1", 00:11:16.467 "uuid": "51476ae0-c9e2-47ec-9050-913d1c7a8a49", 00:11:16.467 "is_configured": true, 00:11:16.467 "data_offset": 0, 00:11:16.467 "data_size": 65536 00:11:16.467 }, 00:11:16.467 { 00:11:16.467 "name": "BaseBdev2", 
00:11:16.467 "uuid": "6120d46a-c4bc-4f05-9df3-863c267c6999", 00:11:16.467 "is_configured": true, 00:11:16.467 "data_offset": 0, 00:11:16.467 "data_size": 65536 00:11:16.467 } 00:11:16.467 ] 00:11:16.467 } 00:11:16.467 } 00:11:16.467 }' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:16.467 BaseBdev2' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.467 14:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.467 [2024-11-27 14:10:46.955776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.467 [2024-11-27 14:10:46.955844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.467 [2024-11-27 14:10:46.955928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.724 "name": "Existed_Raid", 00:11:16.724 "uuid": "cd765f37-6206-48b1-846a-a5fd512d1474", 00:11:16.724 "strip_size_kb": 64, 00:11:16.724 
"state": "offline", 00:11:16.724 "raid_level": "concat", 00:11:16.724 "superblock": false, 00:11:16.724 "num_base_bdevs": 2, 00:11:16.724 "num_base_bdevs_discovered": 1, 00:11:16.724 "num_base_bdevs_operational": 1, 00:11:16.724 "base_bdevs_list": [ 00:11:16.724 { 00:11:16.724 "name": null, 00:11:16.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.724 "is_configured": false, 00:11:16.724 "data_offset": 0, 00:11:16.724 "data_size": 65536 00:11:16.724 }, 00:11:16.724 { 00:11:16.724 "name": "BaseBdev2", 00:11:16.724 "uuid": "6120d46a-c4bc-4f05-9df3-863c267c6999", 00:11:16.724 "is_configured": true, 00:11:16.724 "data_offset": 0, 00:11:16.724 "data_size": 65536 00:11:16.724 } 00:11:16.724 ] 00:11:16.724 }' 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.724 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.290 [2024-11-27 14:10:47.631405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.290 [2024-11-27 14:10:47.631475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61800 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61800 ']' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61800 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.290 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61800 00:11:17.547 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.547 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.547 killing process with pid 61800 00:11:17.547 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61800' 00:11:17.547 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61800 00:11:17.547 [2024-11-27 14:10:47.808008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.547 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61800 00:11:17.547 [2024-11-27 14:10:47.822599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:18.483 00:11:18.483 real 0m5.538s 00:11:18.483 user 0m8.406s 00:11:18.483 sys 0m0.780s 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.483 ************************************ 00:11:18.483 END TEST raid_state_function_test 00:11:18.483 ************************************ 00:11:18.483 14:10:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:18.483 14:10:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:18.483 14:10:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.483 14:10:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.483 ************************************ 00:11:18.483 START TEST raid_state_function_test_sb 00:11:18.483 ************************************ 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.483 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62053 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62053' 00:11:18.484 Process raid pid: 62053 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62053 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62053 ']' 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.484 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.484 14:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.742 [2024-11-27 14:10:49.025852] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:18.742 [2024-11-27 14:10:49.026059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.742 [2024-11-27 14:10:49.211294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.999 [2024-11-27 14:10:49.343444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.257 [2024-11-27 14:10:49.553045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.257 [2024-11-27 14:10:49.553122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.824 [2024-11-27 14:10:50.037869] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:11:19.824 [2024-11-27 14:10:50.037936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.824 [2024-11-27 14:10:50.037982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.824 [2024-11-27 14:10:50.038010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.824 
14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.824 "name": "Existed_Raid", 00:11:19.824 "uuid": "fdba1d4a-028d-4bab-b0c3-fed135642e90", 00:11:19.824 "strip_size_kb": 64, 00:11:19.824 "state": "configuring", 00:11:19.824 "raid_level": "concat", 00:11:19.824 "superblock": true, 00:11:19.824 "num_base_bdevs": 2, 00:11:19.824 "num_base_bdevs_discovered": 0, 00:11:19.824 "num_base_bdevs_operational": 2, 00:11:19.824 "base_bdevs_list": [ 00:11:19.824 { 00:11:19.824 "name": "BaseBdev1", 00:11:19.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.824 "is_configured": false, 00:11:19.824 "data_offset": 0, 00:11:19.824 "data_size": 0 00:11:19.824 }, 00:11:19.824 { 00:11:19.824 "name": "BaseBdev2", 00:11:19.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.824 "is_configured": false, 00:11:19.824 "data_offset": 0, 00:11:19.824 "data_size": 0 00:11:19.824 } 00:11:19.824 ] 00:11:19.824 }' 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.824 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 [2024-11-27 14:10:50.553914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:11:20.104 [2024-11-27 14:10:50.553975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 [2024-11-27 14:10:50.565909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.104 [2024-11-27 14:10:50.566005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.104 [2024-11-27 14:10:50.566030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.104 [2024-11-27 14:10:50.566059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 [2024-11-27 14:10:50.611081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.104 BaseBdev1 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.363 [ 00:11:20.363 { 00:11:20.363 "name": "BaseBdev1", 00:11:20.363 "aliases": [ 00:11:20.363 "5dc3d4d5-b2b7-4ef6-b009-fd0764c6f839" 00:11:20.363 ], 00:11:20.363 "product_name": "Malloc disk", 00:11:20.363 "block_size": 512, 00:11:20.363 "num_blocks": 65536, 00:11:20.363 "uuid": "5dc3d4d5-b2b7-4ef6-b009-fd0764c6f839", 00:11:20.363 "assigned_rate_limits": { 00:11:20.363 "rw_ios_per_sec": 0, 00:11:20.363 "rw_mbytes_per_sec": 0, 00:11:20.363 "r_mbytes_per_sec": 0, 00:11:20.363 "w_mbytes_per_sec": 0 00:11:20.363 }, 00:11:20.363 "claimed": true, 
00:11:20.363 "claim_type": "exclusive_write", 00:11:20.363 "zoned": false, 00:11:20.363 "supported_io_types": { 00:11:20.363 "read": true, 00:11:20.363 "write": true, 00:11:20.363 "unmap": true, 00:11:20.363 "flush": true, 00:11:20.363 "reset": true, 00:11:20.363 "nvme_admin": false, 00:11:20.363 "nvme_io": false, 00:11:20.363 "nvme_io_md": false, 00:11:20.363 "write_zeroes": true, 00:11:20.363 "zcopy": true, 00:11:20.363 "get_zone_info": false, 00:11:20.363 "zone_management": false, 00:11:20.363 "zone_append": false, 00:11:20.363 "compare": false, 00:11:20.363 "compare_and_write": false, 00:11:20.363 "abort": true, 00:11:20.363 "seek_hole": false, 00:11:20.363 "seek_data": false, 00:11:20.363 "copy": true, 00:11:20.363 "nvme_iov_md": false 00:11:20.363 }, 00:11:20.363 "memory_domains": [ 00:11:20.363 { 00:11:20.363 "dma_device_id": "system", 00:11:20.363 "dma_device_type": 1 00:11:20.363 }, 00:11:20.363 { 00:11:20.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.363 "dma_device_type": 2 00:11:20.363 } 00:11:20.363 ], 00:11:20.363 "driver_specific": {} 00:11:20.363 } 00:11:20.363 ] 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.363 14:10:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.363 "name": "Existed_Raid", 00:11:20.363 "uuid": "6ced6aac-0ba3-4cc2-8c89-2f0a074b84b1", 00:11:20.363 "strip_size_kb": 64, 00:11:20.363 "state": "configuring", 00:11:20.363 "raid_level": "concat", 00:11:20.363 "superblock": true, 00:11:20.363 "num_base_bdevs": 2, 00:11:20.363 "num_base_bdevs_discovered": 1, 00:11:20.363 "num_base_bdevs_operational": 2, 00:11:20.363 "base_bdevs_list": [ 00:11:20.363 { 00:11:20.363 "name": "BaseBdev1", 00:11:20.363 "uuid": "5dc3d4d5-b2b7-4ef6-b009-fd0764c6f839", 00:11:20.363 "is_configured": true, 00:11:20.363 "data_offset": 2048, 00:11:20.363 "data_size": 63488 00:11:20.363 }, 00:11:20.363 { 00:11:20.363 "name": "BaseBdev2", 00:11:20.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.363 
"is_configured": false, 00:11:20.363 "data_offset": 0, 00:11:20.363 "data_size": 0 00:11:20.363 } 00:11:20.363 ] 00:11:20.363 }' 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.363 14:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.622 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.622 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.622 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 [2024-11-27 14:10:51.135320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.881 [2024-11-27 14:10:51.135386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 [2024-11-27 14:10:51.143310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.881 [2024-11-27 14:10:51.145804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.881 [2024-11-27 14:10:51.145878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 14:10:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.881 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.881 14:10:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.881 "name": "Existed_Raid", 00:11:20.881 "uuid": "26feae20-e299-4119-8b9f-7e3e15cee4aa", 00:11:20.881 "strip_size_kb": 64, 00:11:20.881 "state": "configuring", 00:11:20.881 "raid_level": "concat", 00:11:20.881 "superblock": true, 00:11:20.881 "num_base_bdevs": 2, 00:11:20.881 "num_base_bdevs_discovered": 1, 00:11:20.881 "num_base_bdevs_operational": 2, 00:11:20.881 "base_bdevs_list": [ 00:11:20.881 { 00:11:20.881 "name": "BaseBdev1", 00:11:20.881 "uuid": "5dc3d4d5-b2b7-4ef6-b009-fd0764c6f839", 00:11:20.882 "is_configured": true, 00:11:20.882 "data_offset": 2048, 00:11:20.882 "data_size": 63488 00:11:20.882 }, 00:11:20.882 { 00:11:20.882 "name": "BaseBdev2", 00:11:20.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.882 "is_configured": false, 00:11:20.882 "data_offset": 0, 00:11:20.882 "data_size": 0 00:11:20.882 } 00:11:20.882 ] 00:11:20.882 }' 00:11:20.882 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.882 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.448 [2024-11-27 14:10:51.693777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.448 [2024-11-27 14:10:51.694114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.448 [2024-11-27 14:10:51.694134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:21.448 BaseBdev2 00:11:21.448 [2024-11-27 14:10:51.694458] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:21.448 [2024-11-27 14:10:51.694664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.448 [2024-11-27 14:10:51.694696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.448 [2024-11-27 14:10:51.694885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.448 
14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.448 [ 00:11:21.448 { 00:11:21.448 "name": "BaseBdev2", 00:11:21.448 "aliases": [ 00:11:21.448 "341487c2-1ed8-4e73-a048-429f96c8e9d8" 00:11:21.448 ], 00:11:21.448 "product_name": "Malloc disk", 00:11:21.448 "block_size": 512, 00:11:21.448 "num_blocks": 65536, 00:11:21.448 "uuid": "341487c2-1ed8-4e73-a048-429f96c8e9d8", 00:11:21.448 "assigned_rate_limits": { 00:11:21.448 "rw_ios_per_sec": 0, 00:11:21.448 "rw_mbytes_per_sec": 0, 00:11:21.448 "r_mbytes_per_sec": 0, 00:11:21.448 "w_mbytes_per_sec": 0 00:11:21.448 }, 00:11:21.448 "claimed": true, 00:11:21.448 "claim_type": "exclusive_write", 00:11:21.448 "zoned": false, 00:11:21.448 "supported_io_types": { 00:11:21.448 "read": true, 00:11:21.448 "write": true, 00:11:21.448 "unmap": true, 00:11:21.448 "flush": true, 00:11:21.448 "reset": true, 00:11:21.448 "nvme_admin": false, 00:11:21.448 "nvme_io": false, 00:11:21.448 "nvme_io_md": false, 00:11:21.448 "write_zeroes": true, 00:11:21.448 "zcopy": true, 00:11:21.448 "get_zone_info": false, 00:11:21.448 "zone_management": false, 00:11:21.448 "zone_append": false, 00:11:21.448 "compare": false, 00:11:21.448 "compare_and_write": false, 00:11:21.448 "abort": true, 00:11:21.448 "seek_hole": false, 00:11:21.448 "seek_data": false, 00:11:21.448 "copy": true, 00:11:21.448 "nvme_iov_md": false 00:11:21.448 }, 00:11:21.448 "memory_domains": [ 00:11:21.448 { 00:11:21.448 "dma_device_id": "system", 00:11:21.448 "dma_device_type": 1 00:11:21.448 }, 00:11:21.448 { 00:11:21.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.448 "dma_device_type": 2 00:11:21.448 } 00:11:21.448 ], 00:11:21.448 "driver_specific": {} 00:11:21.448 } 00:11:21.448 ] 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.448 14:10:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.448 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.449 14:10:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.449 "name": "Existed_Raid", 00:11:21.449 "uuid": "26feae20-e299-4119-8b9f-7e3e15cee4aa", 00:11:21.449 "strip_size_kb": 64, 00:11:21.449 "state": "online", 00:11:21.449 "raid_level": "concat", 00:11:21.449 "superblock": true, 00:11:21.449 "num_base_bdevs": 2, 00:11:21.449 "num_base_bdevs_discovered": 2, 00:11:21.449 "num_base_bdevs_operational": 2, 00:11:21.449 "base_bdevs_list": [ 00:11:21.449 { 00:11:21.449 "name": "BaseBdev1", 00:11:21.449 "uuid": "5dc3d4d5-b2b7-4ef6-b009-fd0764c6f839", 00:11:21.449 "is_configured": true, 00:11:21.449 "data_offset": 2048, 00:11:21.449 "data_size": 63488 00:11:21.449 }, 00:11:21.449 { 00:11:21.449 "name": "BaseBdev2", 00:11:21.449 "uuid": "341487c2-1ed8-4e73-a048-429f96c8e9d8", 00:11:21.449 "is_configured": true, 00:11:21.449 "data_offset": 2048, 00:11:21.449 "data_size": 63488 00:11:21.449 } 00:11:21.449 ] 00:11:21.449 }' 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.449 14:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:21.707 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.965 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.965 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.965 [2024-11-27 14:10:52.222312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.965 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.965 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.965 "name": "Existed_Raid", 00:11:21.965 "aliases": [ 00:11:21.965 "26feae20-e299-4119-8b9f-7e3e15cee4aa" 00:11:21.965 ], 00:11:21.965 "product_name": "Raid Volume", 00:11:21.965 "block_size": 512, 00:11:21.965 "num_blocks": 126976, 00:11:21.965 "uuid": "26feae20-e299-4119-8b9f-7e3e15cee4aa", 00:11:21.965 "assigned_rate_limits": { 00:11:21.965 "rw_ios_per_sec": 0, 00:11:21.965 "rw_mbytes_per_sec": 0, 00:11:21.965 "r_mbytes_per_sec": 0, 00:11:21.965 "w_mbytes_per_sec": 0 00:11:21.965 }, 00:11:21.965 "claimed": false, 00:11:21.965 "zoned": false, 00:11:21.965 "supported_io_types": { 00:11:21.965 "read": true, 00:11:21.965 "write": true, 00:11:21.965 "unmap": true, 00:11:21.965 "flush": true, 00:11:21.965 "reset": true, 00:11:21.965 "nvme_admin": false, 00:11:21.965 "nvme_io": false, 00:11:21.965 "nvme_io_md": false, 00:11:21.965 "write_zeroes": true, 00:11:21.965 "zcopy": false, 00:11:21.966 "get_zone_info": false, 00:11:21.966 "zone_management": false, 00:11:21.966 "zone_append": false, 00:11:21.966 "compare": false, 00:11:21.966 "compare_and_write": false, 00:11:21.966 "abort": false, 00:11:21.966 "seek_hole": false, 00:11:21.966 "seek_data": false, 00:11:21.966 "copy": false, 00:11:21.966 "nvme_iov_md": false 00:11:21.966 }, 00:11:21.966 "memory_domains": [ 00:11:21.966 { 00:11:21.966 
"dma_device_id": "system", 00:11:21.966 "dma_device_type": 1 00:11:21.966 }, 00:11:21.966 { 00:11:21.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.966 "dma_device_type": 2 00:11:21.966 }, 00:11:21.966 { 00:11:21.966 "dma_device_id": "system", 00:11:21.966 "dma_device_type": 1 00:11:21.966 }, 00:11:21.966 { 00:11:21.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.966 "dma_device_type": 2 00:11:21.966 } 00:11:21.966 ], 00:11:21.966 "driver_specific": { 00:11:21.966 "raid": { 00:11:21.966 "uuid": "26feae20-e299-4119-8b9f-7e3e15cee4aa", 00:11:21.966 "strip_size_kb": 64, 00:11:21.966 "state": "online", 00:11:21.966 "raid_level": "concat", 00:11:21.966 "superblock": true, 00:11:21.966 "num_base_bdevs": 2, 00:11:21.966 "num_base_bdevs_discovered": 2, 00:11:21.966 "num_base_bdevs_operational": 2, 00:11:21.966 "base_bdevs_list": [ 00:11:21.966 { 00:11:21.966 "name": "BaseBdev1", 00:11:21.966 "uuid": "5dc3d4d5-b2b7-4ef6-b009-fd0764c6f839", 00:11:21.966 "is_configured": true, 00:11:21.966 "data_offset": 2048, 00:11:21.966 "data_size": 63488 00:11:21.966 }, 00:11:21.966 { 00:11:21.966 "name": "BaseBdev2", 00:11:21.966 "uuid": "341487c2-1ed8-4e73-a048-429f96c8e9d8", 00:11:21.966 "is_configured": true, 00:11:21.966 "data_offset": 2048, 00:11:21.966 "data_size": 63488 00:11:21.966 } 00:11:21.966 ] 00:11:21.966 } 00:11:21.966 } 00:11:21.966 }' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:21.966 BaseBdev2' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.966 14:10:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.966 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 [2024-11-27 14:10:52.474068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.966 [2024-11-27 14:10:52.474115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.966 [2024-11-27 14:10:52.474188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.225 "name": "Existed_Raid", 00:11:22.225 "uuid": "26feae20-e299-4119-8b9f-7e3e15cee4aa", 00:11:22.225 "strip_size_kb": 64, 00:11:22.225 "state": "offline", 00:11:22.225 "raid_level": "concat", 00:11:22.225 "superblock": true, 00:11:22.225 "num_base_bdevs": 2, 00:11:22.225 "num_base_bdevs_discovered": 1, 00:11:22.225 "num_base_bdevs_operational": 1, 00:11:22.225 "base_bdevs_list": [ 00:11:22.225 { 00:11:22.225 "name": null, 00:11:22.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.225 "is_configured": false, 00:11:22.225 "data_offset": 0, 00:11:22.225 "data_size": 63488 00:11:22.225 }, 00:11:22.225 { 00:11:22.225 "name": "BaseBdev2", 00:11:22.225 "uuid": "341487c2-1ed8-4e73-a048-429f96c8e9d8", 00:11:22.225 "is_configured": true, 00:11:22.225 "data_offset": 2048, 00:11:22.225 "data_size": 63488 00:11:22.225 } 00:11:22.225 ] 
00:11:22.225 }' 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.225 14:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.794 [2024-11-27 14:10:53.120491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.794 [2024-11-27 14:10:53.120572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.794 14:10:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62053 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62053 ']' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62053 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62053 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:11:22.794 killing process with pid 62053 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62053' 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62053 00:11:22.794 [2024-11-27 14:10:53.292850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.794 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62053 00:11:23.053 [2024-11-27 14:10:53.307762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.988 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.988 00:11:23.988 real 0m5.465s 00:11:23.988 user 0m8.270s 00:11:23.988 sys 0m0.740s 00:11:23.988 14:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.988 ************************************ 00:11:23.988 END TEST raid_state_function_test_sb 00:11:23.988 14:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.988 ************************************ 00:11:23.988 14:10:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:23.988 14:10:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.988 14:10:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.988 14:10:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.988 ************************************ 00:11:23.988 START TEST raid_superblock_test 00:11:23.988 ************************************ 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62310 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62310 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62310 ']' 00:11:23.988 14:10:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.988 14:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.247 [2024-11-27 14:10:54.542894] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:24.247 [2024-11-27 14:10:54.543048] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62310 ] 00:11:24.247 [2024-11-27 14:10:54.716648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.505 [2024-11-27 14:10:54.850350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.762 [2024-11-27 14:10:55.052728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.762 [2024-11-27 14:10:55.052802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.020 
14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:25.020 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.279 malloc1 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.279 [2024-11-27 14:10:55.583143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.279 [2024-11-27 14:10:55.583215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.279 [2024-11-27 14:10:55.583248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:25.279 [2024-11-27 14:10:55.583265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:25.279 [2024-11-27 14:10:55.586105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.279 [2024-11-27 14:10:55.586149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.279 pt1 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.279 malloc2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.279 [2024-11-27 14:10:55.639310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.279 [2024-11-27 14:10:55.639382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.279 [2024-11-27 14:10:55.639420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:25.279 [2024-11-27 14:10:55.639435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.279 [2024-11-27 14:10:55.642265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.279 [2024-11-27 14:10:55.642311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.279 pt2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.279 [2024-11-27 14:10:55.647391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.279 [2024-11-27 14:10:55.649856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.279 [2024-11-27 14:10:55.650098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:25.279 [2024-11-27 14:10:55.650117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:11:25.279 [2024-11-27 14:10:55.650432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:25.279 [2024-11-27 14:10:55.650643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:25.279 [2024-11-27 14:10:55.650672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:25.279 [2024-11-27 14:10:55.650875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.279 14:10:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.279 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.279 "name": "raid_bdev1", 00:11:25.279 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:25.279 "strip_size_kb": 64, 00:11:25.279 "state": "online", 00:11:25.279 "raid_level": "concat", 00:11:25.279 "superblock": true, 00:11:25.279 "num_base_bdevs": 2, 00:11:25.279 "num_base_bdevs_discovered": 2, 00:11:25.279 "num_base_bdevs_operational": 2, 00:11:25.279 "base_bdevs_list": [ 00:11:25.279 { 00:11:25.279 "name": "pt1", 00:11:25.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.280 "is_configured": true, 00:11:25.280 "data_offset": 2048, 00:11:25.280 "data_size": 63488 00:11:25.280 }, 00:11:25.280 { 00:11:25.280 "name": "pt2", 00:11:25.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.280 "is_configured": true, 00:11:25.280 "data_offset": 2048, 00:11:25.280 "data_size": 63488 00:11:25.280 } 00:11:25.280 ] 00:11:25.280 }' 00:11:25.280 14:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.280 14:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 [2024-11-27 14:10:56.195804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.845 "name": "raid_bdev1", 00:11:25.845 "aliases": [ 00:11:25.845 "89d99dae-1605-42da-bbe5-0dba041cee26" 00:11:25.845 ], 00:11:25.845 "product_name": "Raid Volume", 00:11:25.845 "block_size": 512, 00:11:25.845 "num_blocks": 126976, 00:11:25.845 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:25.845 "assigned_rate_limits": { 00:11:25.845 "rw_ios_per_sec": 0, 00:11:25.845 "rw_mbytes_per_sec": 0, 00:11:25.845 "r_mbytes_per_sec": 0, 00:11:25.845 "w_mbytes_per_sec": 0 00:11:25.845 }, 00:11:25.845 "claimed": false, 00:11:25.845 "zoned": false, 00:11:25.845 "supported_io_types": { 00:11:25.845 "read": true, 00:11:25.845 "write": true, 00:11:25.845 "unmap": true, 00:11:25.845 "flush": true, 00:11:25.845 "reset": true, 00:11:25.845 "nvme_admin": false, 00:11:25.845 "nvme_io": false, 00:11:25.845 "nvme_io_md": false, 00:11:25.845 "write_zeroes": true, 00:11:25.845 "zcopy": false, 00:11:25.845 "get_zone_info": false, 00:11:25.845 "zone_management": false, 00:11:25.845 "zone_append": false, 00:11:25.845 "compare": false, 00:11:25.845 "compare_and_write": false, 00:11:25.845 "abort": false, 00:11:25.845 
"seek_hole": false, 00:11:25.845 "seek_data": false, 00:11:25.845 "copy": false, 00:11:25.845 "nvme_iov_md": false 00:11:25.845 }, 00:11:25.845 "memory_domains": [ 00:11:25.845 { 00:11:25.845 "dma_device_id": "system", 00:11:25.845 "dma_device_type": 1 00:11:25.845 }, 00:11:25.845 { 00:11:25.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.845 "dma_device_type": 2 00:11:25.845 }, 00:11:25.845 { 00:11:25.845 "dma_device_id": "system", 00:11:25.845 "dma_device_type": 1 00:11:25.845 }, 00:11:25.845 { 00:11:25.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.845 "dma_device_type": 2 00:11:25.845 } 00:11:25.845 ], 00:11:25.845 "driver_specific": { 00:11:25.845 "raid": { 00:11:25.845 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:25.845 "strip_size_kb": 64, 00:11:25.845 "state": "online", 00:11:25.845 "raid_level": "concat", 00:11:25.845 "superblock": true, 00:11:25.845 "num_base_bdevs": 2, 00:11:25.845 "num_base_bdevs_discovered": 2, 00:11:25.845 "num_base_bdevs_operational": 2, 00:11:25.845 "base_bdevs_list": [ 00:11:25.845 { 00:11:25.845 "name": "pt1", 00:11:25.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.845 "is_configured": true, 00:11:25.845 "data_offset": 2048, 00:11:25.845 "data_size": 63488 00:11:25.845 }, 00:11:25.845 { 00:11:25.845 "name": "pt2", 00:11:25.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.845 "is_configured": true, 00:11:25.845 "data_offset": 2048, 00:11:25.845 "data_size": 63488 00:11:25.845 } 00:11:25.845 ] 00:11:25.845 } 00:11:25.845 } 00:11:25.845 }' 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:25.845 pt2' 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.845 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:26.104 [2024-11-27 14:10:56.451809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=89d99dae-1605-42da-bbe5-0dba041cee26 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 89d99dae-1605-42da-bbe5-0dba041cee26 ']' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 [2024-11-27 14:10:56.511499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.104 [2024-11-27 14:10:56.511531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.104 [2024-11-27 14:10:56.511629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.104 [2024-11-27 14:10:56.511695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.104 [2024-11-27 14:10:56.511714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:26.104 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.363 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.363 [2024-11-27 14:10:56.643598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:26.363 [2024-11-27 14:10:56.646213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:26.364 [2024-11-27 14:10:56.646308] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:26.364 [2024-11-27 14:10:56.646399] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:26.364 [2024-11-27 14:10:56.646426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.364 [2024-11-27 14:10:56.646442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:26.364 request: 00:11:26.364 { 00:11:26.364 "name": "raid_bdev1", 00:11:26.364 "raid_level": "concat", 00:11:26.364 "base_bdevs": [ 00:11:26.364 "malloc1", 00:11:26.364 "malloc2" 00:11:26.364 ], 00:11:26.364 "strip_size_kb": 64, 00:11:26.364 "superblock": false, 00:11:26.364 "method": "bdev_raid_create", 00:11:26.364 "req_id": 1 00:11:26.364 } 00:11:26.364 Got JSON-RPC error response 00:11:26.364 response: 00:11:26.364 { 00:11:26.364 "code": -17, 00:11:26.364 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:26.364 } 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.364 [2024-11-27 14:10:56.707566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:26.364 [2024-11-27 14:10:56.707751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.364 [2024-11-27 14:10:56.707842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:26.364 [2024-11-27 14:10:56.708065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.364 [2024-11-27 14:10:56.710986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.364 [2024-11-27 14:10:56.711143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:26.364 [2024-11-27 14:10:56.711338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:26.364 [2024-11-27 14:10:56.711513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:26.364 pt1 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.364 "name": "raid_bdev1", 00:11:26.364 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:26.364 "strip_size_kb": 64, 00:11:26.364 "state": "configuring", 00:11:26.364 "raid_level": "concat", 00:11:26.364 "superblock": true, 00:11:26.364 "num_base_bdevs": 2, 00:11:26.364 "num_base_bdevs_discovered": 1, 00:11:26.364 "num_base_bdevs_operational": 2, 00:11:26.364 "base_bdevs_list": [ 00:11:26.364 { 00:11:26.364 
"name": "pt1", 00:11:26.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.364 "is_configured": true, 00:11:26.364 "data_offset": 2048, 00:11:26.364 "data_size": 63488 00:11:26.364 }, 00:11:26.364 { 00:11:26.364 "name": null, 00:11:26.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.364 "is_configured": false, 00:11:26.364 "data_offset": 2048, 00:11:26.364 "data_size": 63488 00:11:26.364 } 00:11:26.364 ] 00:11:26.364 }' 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.364 14:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.931 [2024-11-27 14:10:57.219992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.931 [2024-11-27 14:10:57.220082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.931 [2024-11-27 14:10:57.220115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:26.931 [2024-11-27 14:10:57.220133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.931 [2024-11-27 14:10:57.220705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.931 [2024-11-27 14:10:57.220742] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.931 [2024-11-27 14:10:57.220877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.931 [2024-11-27 14:10:57.220918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.931 [2024-11-27 14:10:57.221061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.931 [2024-11-27 14:10:57.221081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:26.931 [2024-11-27 14:10:57.221382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:26.931 [2024-11-27 14:10:57.221561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.931 [2024-11-27 14:10:57.221576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:26.931 [2024-11-27 14:10:57.221738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.931 pt2 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.931 
14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.931 "name": "raid_bdev1", 00:11:26.931 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:26.931 "strip_size_kb": 64, 00:11:26.931 "state": "online", 00:11:26.931 "raid_level": "concat", 00:11:26.931 "superblock": true, 00:11:26.931 "num_base_bdevs": 2, 00:11:26.931 "num_base_bdevs_discovered": 2, 00:11:26.931 "num_base_bdevs_operational": 2, 00:11:26.931 "base_bdevs_list": [ 00:11:26.931 { 00:11:26.931 "name": "pt1", 00:11:26.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.931 "is_configured": true, 00:11:26.931 "data_offset": 2048, 00:11:26.931 "data_size": 63488 00:11:26.931 }, 00:11:26.931 { 00:11:26.931 "name": "pt2", 00:11:26.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.931 "is_configured": true, 00:11:26.931 "data_offset": 2048, 00:11:26.931 "data_size": 63488 
00:11:26.931 } 00:11:26.931 ] 00:11:26.931 }' 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.931 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.498 [2024-11-27 14:10:57.724398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.498 "name": "raid_bdev1", 00:11:27.498 "aliases": [ 00:11:27.498 "89d99dae-1605-42da-bbe5-0dba041cee26" 00:11:27.498 ], 00:11:27.498 "product_name": "Raid Volume", 00:11:27.498 "block_size": 512, 00:11:27.498 "num_blocks": 126976, 00:11:27.498 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:27.498 "assigned_rate_limits": { 00:11:27.498 
"rw_ios_per_sec": 0, 00:11:27.498 "rw_mbytes_per_sec": 0, 00:11:27.498 "r_mbytes_per_sec": 0, 00:11:27.498 "w_mbytes_per_sec": 0 00:11:27.498 }, 00:11:27.498 "claimed": false, 00:11:27.498 "zoned": false, 00:11:27.498 "supported_io_types": { 00:11:27.498 "read": true, 00:11:27.498 "write": true, 00:11:27.498 "unmap": true, 00:11:27.498 "flush": true, 00:11:27.498 "reset": true, 00:11:27.498 "nvme_admin": false, 00:11:27.498 "nvme_io": false, 00:11:27.498 "nvme_io_md": false, 00:11:27.498 "write_zeroes": true, 00:11:27.498 "zcopy": false, 00:11:27.498 "get_zone_info": false, 00:11:27.498 "zone_management": false, 00:11:27.498 "zone_append": false, 00:11:27.498 "compare": false, 00:11:27.498 "compare_and_write": false, 00:11:27.498 "abort": false, 00:11:27.498 "seek_hole": false, 00:11:27.498 "seek_data": false, 00:11:27.498 "copy": false, 00:11:27.498 "nvme_iov_md": false 00:11:27.498 }, 00:11:27.498 "memory_domains": [ 00:11:27.498 { 00:11:27.498 "dma_device_id": "system", 00:11:27.498 "dma_device_type": 1 00:11:27.498 }, 00:11:27.498 { 00:11:27.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.498 "dma_device_type": 2 00:11:27.498 }, 00:11:27.498 { 00:11:27.498 "dma_device_id": "system", 00:11:27.498 "dma_device_type": 1 00:11:27.498 }, 00:11:27.498 { 00:11:27.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.498 "dma_device_type": 2 00:11:27.498 } 00:11:27.498 ], 00:11:27.498 "driver_specific": { 00:11:27.498 "raid": { 00:11:27.498 "uuid": "89d99dae-1605-42da-bbe5-0dba041cee26", 00:11:27.498 "strip_size_kb": 64, 00:11:27.498 "state": "online", 00:11:27.498 "raid_level": "concat", 00:11:27.498 "superblock": true, 00:11:27.498 "num_base_bdevs": 2, 00:11:27.498 "num_base_bdevs_discovered": 2, 00:11:27.498 "num_base_bdevs_operational": 2, 00:11:27.498 "base_bdevs_list": [ 00:11:27.498 { 00:11:27.498 "name": "pt1", 00:11:27.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.498 "is_configured": true, 00:11:27.498 "data_offset": 2048, 00:11:27.498 
"data_size": 63488 00:11:27.498 }, 00:11:27.498 { 00:11:27.498 "name": "pt2", 00:11:27.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.498 "is_configured": true, 00:11:27.498 "data_offset": 2048, 00:11:27.498 "data_size": 63488 00:11:27.498 } 00:11:27.498 ] 00:11:27.498 } 00:11:27.498 } 00:11:27.498 }' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.498 pt2' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.498 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.499 [2024-11-27 14:10:57.976448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.499 14:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 89d99dae-1605-42da-bbe5-0dba041cee26 '!=' 89d99dae-1605-42da-bbe5-0dba041cee26 ']' 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62310 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62310 
']' 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62310 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62310 00:11:27.758 killing process with pid 62310 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62310' 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62310 00:11:27.758 [2024-11-27 14:10:58.056288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.758 14:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62310 00:11:27.758 [2024-11-27 14:10:58.056404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.758 [2024-11-27 14:10:58.056477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.758 [2024-11-27 14:10:58.056497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:27.758 [2024-11-27 14:10:58.243641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.133 14:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:29.133 00:11:29.133 real 0m4.884s 00:11:29.133 user 0m7.218s 00:11:29.133 sys 0m0.690s 00:11:29.133 14:10:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.133 14:10:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.133 ************************************ 00:11:29.133 END TEST raid_superblock_test 00:11:29.133 ************************************ 00:11:29.133 14:10:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:29.133 14:10:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.133 14:10:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.133 14:10:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.133 ************************************ 00:11:29.133 START TEST raid_read_error_test 00:11:29.133 ************************************ 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.133 
14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qInOQZvj2Z 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62522 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62522 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62522 ']' 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.133 14:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.133 [2024-11-27 14:10:59.487182] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:29.133 [2024-11-27 14:10:59.487378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62522 ] 00:11:29.392 [2024-11-27 14:10:59.678192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.392 [2024-11-27 14:10:59.843856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.653 [2024-11-27 14:11:00.086589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.653 [2024-11-27 14:11:00.086671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.227 BaseBdev1_malloc 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.227 true 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.227 [2024-11-27 14:11:00.585724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.227 [2024-11-27 14:11:00.585795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.227 [2024-11-27 14:11:00.585846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:30.227 [2024-11-27 14:11:00.585876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.227 [2024-11-27 14:11:00.588738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.227 [2024-11-27 14:11:00.588995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.227 BaseBdev1 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.227 14:11:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.227 BaseBdev2_malloc 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.227 true 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.227 [2024-11-27 14:11:00.642927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.227 [2024-11-27 14:11:00.643132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.227 [2024-11-27 14:11:00.643168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.227 [2024-11-27 14:11:00.643187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.227 [2024-11-27 14:11:00.646560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.227 BaseBdev2 00:11:30.227 [2024-11-27 14:11:00.646737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.227 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.227 [2024-11-27 14:11:00.651109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.227 [2024-11-27 14:11:00.653724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.227 [2024-11-27 14:11:00.654042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:30.228 [2024-11-27 14:11:00.654068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:30.228 [2024-11-27 14:11:00.654374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:30.228 [2024-11-27 14:11:00.654641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:30.228 [2024-11-27 14:11:00.654664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:30.228 [2024-11-27 14:11:00.654889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.228 "name": "raid_bdev1", 00:11:30.228 "uuid": "f52d58b6-6bfe-4b03-894b-fecd3013044b", 00:11:30.228 "strip_size_kb": 64, 00:11:30.228 "state": "online", 00:11:30.228 "raid_level": "concat", 00:11:30.228 "superblock": true, 00:11:30.228 "num_base_bdevs": 2, 00:11:30.228 "num_base_bdevs_discovered": 2, 00:11:30.228 "num_base_bdevs_operational": 2, 00:11:30.228 "base_bdevs_list": [ 00:11:30.228 { 00:11:30.228 "name": "BaseBdev1", 00:11:30.228 "uuid": "28621eba-1576-5870-99bb-6fd04073ed70", 00:11:30.228 "is_configured": true, 00:11:30.228 "data_offset": 2048, 00:11:30.228 "data_size": 63488 
00:11:30.228 }, 00:11:30.228 { 00:11:30.228 "name": "BaseBdev2", 00:11:30.228 "uuid": "0374f650-80e3-5482-bfa4-322dd21514b8", 00:11:30.228 "is_configured": true, 00:11:30.228 "data_offset": 2048, 00:11:30.228 "data_size": 63488 00:11:30.228 } 00:11:30.228 ] 00:11:30.228 }' 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.228 14:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.794 14:11:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:30.794 14:11:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:31.053 [2024-11-27 14:11:01.308702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
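`verify_raid_bdev_state` extracts the raid bdev record from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks fields such as `state` and `num_base_bdevs_discovered`. A rough Python equivalent of that selection step, using a trimmed-down copy of the JSON shown above (only the fields the check inspects are kept):

```python
import json

# Trimmed-down copy of the bdev_raid_get_bdevs output captured in this log.
raw = '''
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
'''

def select_bdev(bdevs, name):
    """Equivalent of jq's '.[] | select(.name == NAME)' over a bdev list."""
    return next(b for b in bdevs if b["name"] == name)

info = select_bdev(json.loads(raw), "raid_bdev1")
assert info["state"] == "online"
assert info["raid_level"] == "concat"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 2
```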
00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.984 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.984 "name": "raid_bdev1", 00:11:31.984 "uuid": "f52d58b6-6bfe-4b03-894b-fecd3013044b", 00:11:31.984 "strip_size_kb": 64, 00:11:31.985 "state": "online", 00:11:31.985 "raid_level": "concat", 00:11:31.985 "superblock": true, 00:11:31.985 "num_base_bdevs": 2, 00:11:31.985 "num_base_bdevs_discovered": 2, 00:11:31.985 "num_base_bdevs_operational": 2, 00:11:31.985 "base_bdevs_list": [ 00:11:31.985 { 00:11:31.985 "name": "BaseBdev1", 00:11:31.985 "uuid": "28621eba-1576-5870-99bb-6fd04073ed70", 00:11:31.985 "is_configured": true, 00:11:31.985 "data_offset": 2048, 00:11:31.985 "data_size": 63488 
00:11:31.985 }, 00:11:31.985 { 00:11:31.985 "name": "BaseBdev2", 00:11:31.985 "uuid": "0374f650-80e3-5482-bfa4-322dd21514b8", 00:11:31.985 "is_configured": true, 00:11:31.985 "data_offset": 2048, 00:11:31.985 "data_size": 63488 00:11:31.985 } 00:11:31.985 ] 00:11:31.985 }' 00:11:31.985 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.985 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.243 [2024-11-27 14:11:02.704356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.243 [2024-11-27 14:11:02.704604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.243 [2024-11-27 14:11:02.708211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.243 [2024-11-27 14:11:02.708326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.243 [2024-11-27 14:11:02.708376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.243 [2024-11-27 14:11:02.708396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:32.243 { 00:11:32.243 "results": [ 00:11:32.243 { 00:11:32.243 "job": "raid_bdev1", 00:11:32.243 "core_mask": "0x1", 00:11:32.243 "workload": "randrw", 00:11:32.243 "percentage": 50, 00:11:32.243 "status": "finished", 00:11:32.243 "queue_depth": 1, 00:11:32.243 "io_size": 131072, 00:11:32.243 "runtime": 1.393701, 00:11:32.243 "iops": 10347.987122058461, 00:11:32.243 "mibps": 1293.4983902573076, 00:11:32.243 
"io_failed": 1, 00:11:32.243 "io_timeout": 0, 00:11:32.243 "avg_latency_us": 134.56349643561734, 00:11:32.243 "min_latency_us": 42.35636363636364, 00:11:32.243 "max_latency_us": 1846.9236363636364 00:11:32.243 } 00:11:32.243 ], 00:11:32.243 "core_count": 1 00:11:32.243 } 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62522 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62522 ']' 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62522 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62522 00:11:32.243 killing process with pid 62522 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62522' 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62522 00:11:32.243 14:11:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62522 00:11:32.243 [2024-11-27 14:11:02.747405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.500 [2024-11-27 14:11:02.884317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qInOQZvj2Z 00:11:33.938 14:11:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:33.938 ************************************ 00:11:33.938 END TEST raid_read_error_test 00:11:33.938 ************************************ 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:33.938 00:11:33.938 real 0m4.784s 00:11:33.938 user 0m5.984s 00:11:33.938 sys 0m0.574s 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.938 14:11:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.938 14:11:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:33.938 14:11:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:33.938 14:11:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.938 14:11:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.938 ************************************ 00:11:33.938 START TEST raid_write_error_test 00:11:33.938 ************************************ 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:33.938 14:11:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:33.938 14:11:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8YO78ZtTQf 00:11:33.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62673 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62673 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62673 ']' 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.938 14:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.938 [2024-11-27 14:11:04.299221] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
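The `create_arg` assembled in the xtrace above (`strip_size=64`, `create_arg+=' -z 64'`) ends up as the `bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s` call seen in both test runs. A sketch of that argument assembly, reconstructed from the xtrace output rather than taken from the script itself (`-z` is added only for non-raid1 levels, `-s` requests an on-disk superblock):

```python
def build_raid_create_args(raid_level, base_bdevs, name="raid_bdev1",
                           strip_size=64, superblock=True):
    """Illustrative reconstruction of how the test assembles its
    bdev_raid_create arguments, inferred from the xtrace output."""
    args = ["bdev_raid_create"]
    if raid_level != "raid1":      # raid1 has no strip size
        args += ["-z", str(strip_size)]
    args += ["-r", raid_level, "-b", " ".join(base_bdevs), "-n", name]
    if superblock:
        args.append("-s")
    return args

cmd = build_raid_create_args("concat", ["BaseBdev1", "BaseBdev2"])
print(" ".join(cmd))
# bdev_raid_create -z 64 -r concat -b BaseBdev1 BaseBdev2 -n raid_bdev1 -s
```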
00:11:33.938 [2024-11-27 14:11:04.299452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:11:34.214 [2024-11-27 14:11:04.502656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.214 [2024-11-27 14:11:04.694314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.472 [2024-11-27 14:11:04.935831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.472 [2024-11-27 14:11:04.935906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.036 BaseBdev1_malloc 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.036 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.037 true 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.037 [2024-11-27 14:11:05.473585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:35.037 [2024-11-27 14:11:05.473656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.037 [2024-11-27 14:11:05.473686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:35.037 [2024-11-27 14:11:05.473703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.037 [2024-11-27 14:11:05.476968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.037 [2024-11-27 14:11:05.477029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:35.037 BaseBdev1 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.037 BaseBdev2_malloc 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:35.037 14:11:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.037 true 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.037 [2024-11-27 14:11:05.536921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:35.037 [2024-11-27 14:11:05.537134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.037 [2024-11-27 14:11:05.537180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:35.037 [2024-11-27 14:11:05.537203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.037 [2024-11-27 14:11:05.540254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.037 [2024-11-27 14:11:05.540436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:35.037 BaseBdev2 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.037 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.037 [2024-11-27 14:11:05.545129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:35.295 [2024-11-27 14:11:05.547945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.295 [2024-11-27 14:11:05.548257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.295 [2024-11-27 14:11:05.548293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:35.295 [2024-11-27 14:11:05.548632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:35.295 [2024-11-27 14:11:05.548887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.295 [2024-11-27 14:11:05.548911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:35.295 [2024-11-27 14:11:05.549155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.295 14:11:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.295 "name": "raid_bdev1", 00:11:35.295 "uuid": "ba254f48-d1b2-4f73-a8bb-b888b4aa7f3a", 00:11:35.295 "strip_size_kb": 64, 00:11:35.295 "state": "online", 00:11:35.295 "raid_level": "concat", 00:11:35.295 "superblock": true, 00:11:35.295 "num_base_bdevs": 2, 00:11:35.295 "num_base_bdevs_discovered": 2, 00:11:35.295 "num_base_bdevs_operational": 2, 00:11:35.295 "base_bdevs_list": [ 00:11:35.295 { 00:11:35.295 "name": "BaseBdev1", 00:11:35.295 "uuid": "7edb7cc7-136b-5412-9c4e-d355e54359a4", 00:11:35.295 "is_configured": true, 00:11:35.295 "data_offset": 2048, 00:11:35.295 "data_size": 63488 00:11:35.295 }, 00:11:35.295 { 00:11:35.295 "name": "BaseBdev2", 00:11:35.295 "uuid": "468a6b24-b515-57a8-9049-ecf41768632d", 00:11:35.295 "is_configured": true, 00:11:35.295 "data_offset": 2048, 00:11:35.295 "data_size": 63488 00:11:35.295 } 00:11:35.295 ] 00:11:35.295 }' 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.295 14:11:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.553 14:11:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:11:35.553 14:11:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:35.810 [2024-11-27 14:11:06.211388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.742 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.742 "name": "raid_bdev1", 00:11:36.743 "uuid": "ba254f48-d1b2-4f73-a8bb-b888b4aa7f3a", 00:11:36.743 "strip_size_kb": 64, 00:11:36.743 "state": "online", 00:11:36.743 "raid_level": "concat", 00:11:36.743 "superblock": true, 00:11:36.743 "num_base_bdevs": 2, 00:11:36.743 "num_base_bdevs_discovered": 2, 00:11:36.743 "num_base_bdevs_operational": 2, 00:11:36.743 "base_bdevs_list": [ 00:11:36.743 { 00:11:36.743 "name": "BaseBdev1", 00:11:36.743 "uuid": "7edb7cc7-136b-5412-9c4e-d355e54359a4", 00:11:36.743 "is_configured": true, 00:11:36.743 "data_offset": 2048, 00:11:36.743 "data_size": 63488 00:11:36.743 }, 00:11:36.743 { 00:11:36.743 "name": "BaseBdev2", 00:11:36.743 "uuid": "468a6b24-b515-57a8-9049-ecf41768632d", 00:11:36.743 "is_configured": true, 00:11:36.743 "data_offset": 2048, 00:11:36.743 "data_size": 63488 00:11:36.743 } 00:11:36.743 ] 00:11:36.743 }' 00:11:36.743 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.743 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.308 14:11:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.308 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.308 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.308 [2024-11-27 14:11:07.586786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.308 [2024-11-27 14:11:07.586868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.308 { 00:11:37.308 "results": [ 00:11:37.308 { 00:11:37.308 "job": "raid_bdev1", 00:11:37.308 "core_mask": "0x1", 00:11:37.308 "workload": "randrw", 00:11:37.308 "percentage": 50, 00:11:37.308 "status": "finished", 00:11:37.308 "queue_depth": 1, 00:11:37.308 "io_size": 131072, 00:11:37.308 "runtime": 1.37265, 00:11:37.308 "iops": 8995.009652861254, 00:11:37.309 "mibps": 1124.3762066076567, 00:11:37.309 "io_failed": 1, 00:11:37.309 "io_timeout": 0, 00:11:37.309 "avg_latency_us": 154.857967429396, 00:11:37.309 "min_latency_us": 39.33090909090909, 00:11:37.309 "max_latency_us": 2323.549090909091 00:11:37.309 } 00:11:37.309 ], 00:11:37.309 "core_count": 1 00:11:37.309 } 00:11:37.309 [2024-11-27 14:11:07.591550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.309 [2024-11-27 14:11:07.591701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.309 [2024-11-27 14:11:07.591765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.309 [2024-11-27 14:11:07.591800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62673 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62673 ']' 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62673 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62673 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.309 killing process with pid 62673 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62673' 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62673 00:11:37.309 14:11:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62673 00:11:37.309 [2024-11-27 14:11:07.629587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.309 [2024-11-27 14:11:07.769048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8YO78ZtTQf 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:38.748 ************************************ 00:11:38.748 END TEST raid_write_error_test 00:11:38.748 ************************************ 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 
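The `fail_per_s` value that the grep/awk pipeline pulls out of the bdevperf log (0.73 here, 0.72 in the read test) is consistent with `io_failed` divided by `runtime` from the results JSON, and the `mibps` figure with `iops` scaled by the 128 KiB `io_size`. A sketch of that arithmetic using the write-test numbers recorded above (the exact formulas inside bdevperf are an assumption inferred from these values):

```python
# Values copied from the "results" JSON earlier in this log (write test).
io_failed = 1
runtime_s = 1.37265
iops = 8995.009652861254
io_size = 131072  # 128 KiB, from bdevperf's -o 128k

fail_per_s = io_failed / runtime_s          # ~0.7285 -> 0.73 as in the log
mibps = iops * io_size / (1024 * 1024)      # ~1124.376, matching "mibps"

print(round(fail_per_s, 2), mibps)
```

The read-test run's numbers behave the same way: 1 failed I/O over a 1.393701 s runtime gives the 0.72 fail rate extracted there.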
00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:38.748 00:11:38.748 real 0m4.805s 00:11:38.748 user 0m6.050s 00:11:38.748 sys 0m0.584s 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.748 14:11:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.748 14:11:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:38.748 14:11:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:38.748 14:11:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:38.748 14:11:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.748 14:11:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.748 ************************************ 00:11:38.748 START TEST raid_state_function_test 00:11:38.748 ************************************ 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62817 00:11:38.748 Process raid pid: 62817 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62817' 
00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62817 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62817 ']' 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.748 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.748 [2024-11-27 14:11:09.162294] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:11:38.748 [2024-11-27 14:11:09.162485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.005 [2024-11-27 14:11:09.360843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.263 [2024-11-27 14:11:09.561048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.521 [2024-11-27 14:11:09.824693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.521 [2024-11-27 14:11:09.824756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.778 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.778 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.778 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:39.778 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.778 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.778 [2024-11-27 14:11:10.268283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.778 [2024-11-27 14:11:10.268367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.778 [2024-11-27 14:11:10.268396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.778 [2024-11-27 14:11:10.268423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.779 14:11:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.779 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.037 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.037 "name": "Existed_Raid", 00:11:40.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.037 "strip_size_kb": 0, 00:11:40.037 "state": "configuring", 00:11:40.037 
"raid_level": "raid1", 00:11:40.037 "superblock": false, 00:11:40.037 "num_base_bdevs": 2, 00:11:40.037 "num_base_bdevs_discovered": 0, 00:11:40.037 "num_base_bdevs_operational": 2, 00:11:40.037 "base_bdevs_list": [ 00:11:40.037 { 00:11:40.037 "name": "BaseBdev1", 00:11:40.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.037 "is_configured": false, 00:11:40.037 "data_offset": 0, 00:11:40.037 "data_size": 0 00:11:40.037 }, 00:11:40.037 { 00:11:40.037 "name": "BaseBdev2", 00:11:40.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.037 "is_configured": false, 00:11:40.037 "data_offset": 0, 00:11:40.037 "data_size": 0 00:11:40.037 } 00:11:40.037 ] 00:11:40.037 }' 00:11:40.037 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.037 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 [2024-11-27 14:11:10.832302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.652 [2024-11-27 14:11:10.832351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:40.652 [2024-11-27 14:11:10.840266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.652 [2024-11-27 14:11:10.840318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.652 [2024-11-27 14:11:10.840333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.652 [2024-11-27 14:11:10.840352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 [2024-11-27 14:11:10.886153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.652 BaseBdev1 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 [ 00:11:40.652 { 00:11:40.652 "name": "BaseBdev1", 00:11:40.652 "aliases": [ 00:11:40.652 "ccb6a821-b08e-4c53-97b7-b762b8b437cf" 00:11:40.652 ], 00:11:40.652 "product_name": "Malloc disk", 00:11:40.652 "block_size": 512, 00:11:40.652 "num_blocks": 65536, 00:11:40.652 "uuid": "ccb6a821-b08e-4c53-97b7-b762b8b437cf", 00:11:40.652 "assigned_rate_limits": { 00:11:40.652 "rw_ios_per_sec": 0, 00:11:40.652 "rw_mbytes_per_sec": 0, 00:11:40.652 "r_mbytes_per_sec": 0, 00:11:40.652 "w_mbytes_per_sec": 0 00:11:40.652 }, 00:11:40.652 "claimed": true, 00:11:40.652 "claim_type": "exclusive_write", 00:11:40.652 "zoned": false, 00:11:40.652 "supported_io_types": { 00:11:40.652 "read": true, 00:11:40.652 "write": true, 00:11:40.652 "unmap": true, 00:11:40.652 "flush": true, 00:11:40.652 "reset": true, 00:11:40.652 "nvme_admin": false, 00:11:40.652 "nvme_io": false, 00:11:40.652 "nvme_io_md": false, 00:11:40.652 "write_zeroes": true, 00:11:40.652 "zcopy": true, 00:11:40.652 "get_zone_info": false, 00:11:40.652 "zone_management": false, 00:11:40.652 "zone_append": false, 00:11:40.652 "compare": false, 00:11:40.652 "compare_and_write": false, 00:11:40.652 "abort": true, 00:11:40.652 "seek_hole": false, 00:11:40.652 "seek_data": false, 00:11:40.652 "copy": true, 00:11:40.652 "nvme_iov_md": 
false 00:11:40.652 }, 00:11:40.652 "memory_domains": [ 00:11:40.652 { 00:11:40.652 "dma_device_id": "system", 00:11:40.652 "dma_device_type": 1 00:11:40.652 }, 00:11:40.652 { 00:11:40.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.652 "dma_device_type": 2 00:11:40.652 } 00:11:40.652 ], 00:11:40.652 "driver_specific": {} 00:11:40.652 } 00:11:40.652 ] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:40.652 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.653 
14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.653 "name": "Existed_Raid", 00:11:40.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.653 "strip_size_kb": 0, 00:11:40.653 "state": "configuring", 00:11:40.653 "raid_level": "raid1", 00:11:40.653 "superblock": false, 00:11:40.653 "num_base_bdevs": 2, 00:11:40.653 "num_base_bdevs_discovered": 1, 00:11:40.653 "num_base_bdevs_operational": 2, 00:11:40.653 "base_bdevs_list": [ 00:11:40.653 { 00:11:40.653 "name": "BaseBdev1", 00:11:40.653 "uuid": "ccb6a821-b08e-4c53-97b7-b762b8b437cf", 00:11:40.653 "is_configured": true, 00:11:40.653 "data_offset": 0, 00:11:40.653 "data_size": 65536 00:11:40.653 }, 00:11:40.653 { 00:11:40.653 "name": "BaseBdev2", 00:11:40.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.653 "is_configured": false, 00:11:40.653 "data_offset": 0, 00:11:40.653 "data_size": 0 00:11:40.653 } 00:11:40.653 ] 00:11:40.653 }' 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.653 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.221 [2024-11-27 14:11:11.474357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.221 [2024-11-27 14:11:11.474421] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.221 [2024-11-27 14:11:11.482409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.221 [2024-11-27 14:11:11.484926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.221 [2024-11-27 14:11:11.484975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.221 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.221 "name": "Existed_Raid", 00:11:41.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.221 "strip_size_kb": 0, 00:11:41.221 "state": "configuring", 00:11:41.221 "raid_level": "raid1", 00:11:41.221 "superblock": false, 00:11:41.221 "num_base_bdevs": 2, 00:11:41.221 "num_base_bdevs_discovered": 1, 00:11:41.221 "num_base_bdevs_operational": 2, 00:11:41.221 "base_bdevs_list": [ 00:11:41.221 { 00:11:41.221 "name": "BaseBdev1", 00:11:41.221 "uuid": "ccb6a821-b08e-4c53-97b7-b762b8b437cf", 00:11:41.221 "is_configured": true, 00:11:41.221 "data_offset": 0, 00:11:41.221 "data_size": 65536 00:11:41.221 }, 00:11:41.221 { 00:11:41.221 "name": "BaseBdev2", 00:11:41.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.221 "is_configured": false, 00:11:41.222 "data_offset": 0, 00:11:41.222 "data_size": 0 00:11:41.222 } 00:11:41.222 ] 
00:11:41.222 }' 00:11:41.222 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.222 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.790 [2024-11-27 14:11:12.097464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.790 [2024-11-27 14:11:12.097536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.790 [2024-11-27 14:11:12.097551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:41.790 [2024-11-27 14:11:12.097933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:41.790 [2024-11-27 14:11:12.098215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.790 [2024-11-27 14:11:12.098238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:41.790 [2024-11-27 14:11:12.098564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.790 BaseBdev2 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.790 [ 00:11:41.790 { 00:11:41.790 "name": "BaseBdev2", 00:11:41.790 "aliases": [ 00:11:41.790 "b557eec7-1664-423e-8e5a-15a0ce4b4db5" 00:11:41.790 ], 00:11:41.790 "product_name": "Malloc disk", 00:11:41.790 "block_size": 512, 00:11:41.790 "num_blocks": 65536, 00:11:41.790 "uuid": "b557eec7-1664-423e-8e5a-15a0ce4b4db5", 00:11:41.790 "assigned_rate_limits": { 00:11:41.790 "rw_ios_per_sec": 0, 00:11:41.790 "rw_mbytes_per_sec": 0, 00:11:41.790 "r_mbytes_per_sec": 0, 00:11:41.790 "w_mbytes_per_sec": 0 00:11:41.790 }, 00:11:41.790 "claimed": true, 00:11:41.790 "claim_type": "exclusive_write", 00:11:41.790 "zoned": false, 00:11:41.790 "supported_io_types": { 00:11:41.790 "read": true, 00:11:41.790 "write": true, 00:11:41.790 "unmap": true, 00:11:41.790 "flush": true, 00:11:41.790 "reset": true, 00:11:41.790 "nvme_admin": false, 00:11:41.790 "nvme_io": false, 00:11:41.790 "nvme_io_md": false, 00:11:41.790 "write_zeroes": 
true, 00:11:41.790 "zcopy": true, 00:11:41.790 "get_zone_info": false, 00:11:41.790 "zone_management": false, 00:11:41.790 "zone_append": false, 00:11:41.790 "compare": false, 00:11:41.790 "compare_and_write": false, 00:11:41.790 "abort": true, 00:11:41.790 "seek_hole": false, 00:11:41.790 "seek_data": false, 00:11:41.790 "copy": true, 00:11:41.790 "nvme_iov_md": false 00:11:41.790 }, 00:11:41.790 "memory_domains": [ 00:11:41.790 { 00:11:41.790 "dma_device_id": "system", 00:11:41.790 "dma_device_type": 1 00:11:41.790 }, 00:11:41.790 { 00:11:41.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.790 "dma_device_type": 2 00:11:41.790 } 00:11:41.790 ], 00:11:41.790 "driver_specific": {} 00:11:41.790 } 00:11:41.790 ] 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.790 14:11:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.790 "name": "Existed_Raid", 00:11:41.790 "uuid": "1a1afeb2-7ce9-4b55-a881-59f9e1fc7c2f", 00:11:41.790 "strip_size_kb": 0, 00:11:41.790 "state": "online", 00:11:41.790 "raid_level": "raid1", 00:11:41.790 "superblock": false, 00:11:41.790 "num_base_bdevs": 2, 00:11:41.790 "num_base_bdevs_discovered": 2, 00:11:41.790 "num_base_bdevs_operational": 2, 00:11:41.790 "base_bdevs_list": [ 00:11:41.790 { 00:11:41.790 "name": "BaseBdev1", 00:11:41.790 "uuid": "ccb6a821-b08e-4c53-97b7-b762b8b437cf", 00:11:41.790 "is_configured": true, 00:11:41.790 "data_offset": 0, 00:11:41.790 "data_size": 65536 00:11:41.790 }, 00:11:41.790 { 00:11:41.790 "name": "BaseBdev2", 00:11:41.790 "uuid": "b557eec7-1664-423e-8e5a-15a0ce4b4db5", 00:11:41.790 "is_configured": true, 00:11:41.790 "data_offset": 0, 00:11:41.790 "data_size": 65536 00:11:41.790 } 00:11:41.790 ] 00:11:41.790 }' 00:11:41.790 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.790 14:11:12 
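At this point the trace shows the raid moving from `"state": "configuring"` (one base bdev claimed) to `"state": "online"` (both claimed). A deliberately simplified sketch of that transition, as it appears from the log (an assumption about the state machine, not SPDK's actual implementation):

```python
def raid_state(num_base_bdevs: int, num_discovered: int) -> str:
    # Simplification inferred from the trace: the raid bdev stays
    # "configuring" until every configured base bdev has been
    # discovered and claimed, then reports "online".
    return "online" if num_discovered == num_base_bdevs else "configuring"

print(raid_state(2, 1))  # after only BaseBdev1 is claimed
print(raid_state(2, 2))  # after BaseBdev2 is also claimed
```

This matches the two `bdev_raid_get_bdevs` dumps above: `num_base_bdevs_discovered` 1 → `configuring`, then 2 → `online`.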
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.374 [2024-11-27 14:11:12.654066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.374 "name": "Existed_Raid", 00:11:42.374 "aliases": [ 00:11:42.374 "1a1afeb2-7ce9-4b55-a881-59f9e1fc7c2f" 00:11:42.374 ], 00:11:42.374 "product_name": "Raid Volume", 00:11:42.374 "block_size": 512, 00:11:42.374 "num_blocks": 65536, 00:11:42.374 "uuid": "1a1afeb2-7ce9-4b55-a881-59f9e1fc7c2f", 00:11:42.374 "assigned_rate_limits": { 00:11:42.374 "rw_ios_per_sec": 0, 00:11:42.374 "rw_mbytes_per_sec": 0, 00:11:42.374 "r_mbytes_per_sec": 0, 00:11:42.374 
"w_mbytes_per_sec": 0 00:11:42.374 }, 00:11:42.374 "claimed": false, 00:11:42.374 "zoned": false, 00:11:42.374 "supported_io_types": { 00:11:42.374 "read": true, 00:11:42.374 "write": true, 00:11:42.374 "unmap": false, 00:11:42.374 "flush": false, 00:11:42.374 "reset": true, 00:11:42.374 "nvme_admin": false, 00:11:42.374 "nvme_io": false, 00:11:42.374 "nvme_io_md": false, 00:11:42.374 "write_zeroes": true, 00:11:42.374 "zcopy": false, 00:11:42.374 "get_zone_info": false, 00:11:42.374 "zone_management": false, 00:11:42.374 "zone_append": false, 00:11:42.374 "compare": false, 00:11:42.374 "compare_and_write": false, 00:11:42.374 "abort": false, 00:11:42.374 "seek_hole": false, 00:11:42.374 "seek_data": false, 00:11:42.374 "copy": false, 00:11:42.374 "nvme_iov_md": false 00:11:42.374 }, 00:11:42.374 "memory_domains": [ 00:11:42.374 { 00:11:42.374 "dma_device_id": "system", 00:11:42.374 "dma_device_type": 1 00:11:42.374 }, 00:11:42.374 { 00:11:42.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.374 "dma_device_type": 2 00:11:42.374 }, 00:11:42.374 { 00:11:42.374 "dma_device_id": "system", 00:11:42.374 "dma_device_type": 1 00:11:42.374 }, 00:11:42.374 { 00:11:42.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.374 "dma_device_type": 2 00:11:42.374 } 00:11:42.374 ], 00:11:42.374 "driver_specific": { 00:11:42.374 "raid": { 00:11:42.374 "uuid": "1a1afeb2-7ce9-4b55-a881-59f9e1fc7c2f", 00:11:42.374 "strip_size_kb": 0, 00:11:42.374 "state": "online", 00:11:42.374 "raid_level": "raid1", 00:11:42.374 "superblock": false, 00:11:42.374 "num_base_bdevs": 2, 00:11:42.374 "num_base_bdevs_discovered": 2, 00:11:42.374 "num_base_bdevs_operational": 2, 00:11:42.374 "base_bdevs_list": [ 00:11:42.374 { 00:11:42.374 "name": "BaseBdev1", 00:11:42.374 "uuid": "ccb6a821-b08e-4c53-97b7-b762b8b437cf", 00:11:42.374 "is_configured": true, 00:11:42.374 "data_offset": 0, 00:11:42.374 "data_size": 65536 00:11:42.374 }, 00:11:42.374 { 00:11:42.374 "name": "BaseBdev2", 00:11:42.374 "uuid": 
"b557eec7-1664-423e-8e5a-15a0ce4b4db5", 00:11:42.374 "is_configured": true, 00:11:42.374 "data_offset": 0, 00:11:42.374 "data_size": 65536 00:11:42.374 } 00:11:42.374 ] 00:11:42.374 } 00:11:42.374 } 00:11:42.374 }' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:42.374 BaseBdev2' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.374 14:11:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.374 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.632 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.633 [2024-11-27 14:11:12.905801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.633 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.633 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.633 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.633 "name": "Existed_Raid", 00:11:42.633 "uuid": "1a1afeb2-7ce9-4b55-a881-59f9e1fc7c2f", 00:11:42.633 "strip_size_kb": 0, 00:11:42.633 "state": "online", 00:11:42.633 "raid_level": "raid1", 00:11:42.633 "superblock": false, 00:11:42.633 "num_base_bdevs": 2, 00:11:42.633 "num_base_bdevs_discovered": 1, 00:11:42.633 "num_base_bdevs_operational": 1, 00:11:42.633 "base_bdevs_list": [ 00:11:42.633 { 
00:11:42.633 "name": null, 00:11:42.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.633 "is_configured": false, 00:11:42.633 "data_offset": 0, 00:11:42.633 "data_size": 65536 00:11:42.633 }, 00:11:42.633 { 00:11:42.633 "name": "BaseBdev2", 00:11:42.633 "uuid": "b557eec7-1664-423e-8e5a-15a0ce4b4db5", 00:11:42.633 "is_configured": true, 00:11:42.633 "data_offset": 0, 00:11:42.633 "data_size": 65536 00:11:42.633 } 00:11:42.633 ] 00:11:42.633 }' 00:11:42.633 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.633 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.198 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:43.198 [2024-11-27 14:11:13.606322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.198 [2024-11-27 14:11:13.606495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.455 [2024-11-27 14:11:13.739937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.455 [2024-11-27 14:11:13.740046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.455 [2024-11-27 14:11:13.740084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62817 00:11:43.455 14:11:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62817 ']' 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62817 00:11:43.455 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62817 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.456 killing process with pid 62817 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62817' 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62817 00:11:43.456 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62817 00:11:43.456 [2024-11-27 14:11:13.835891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.456 [2024-11-27 14:11:13.852406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.830 00:11:44.830 real 0m5.960s 00:11:44.830 user 0m9.067s 00:11:44.830 sys 0m0.797s 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.830 ************************************ 00:11:44.830 END TEST raid_state_function_test 00:11:44.830 ************************************ 00:11:44.830 14:11:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:44.830 14:11:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.830 14:11:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.830 14:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.830 ************************************ 00:11:44.830 START TEST raid_state_function_test_sb 00:11:44.830 ************************************ 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63081 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63081' 00:11:44.830 Process raid pid: 63081 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63081 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63081 ']' 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.830 14:11:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.830 14:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.830 [2024-11-27 14:11:15.160904] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:44.830 [2024-11-27 14:11:15.161272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.830 [2024-11-27 14:11:15.338647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.089 [2024-11-27 14:11:15.473655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.348 [2024-11-27 14:11:15.692288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.348 [2024-11-27 14:11:15.692344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.919 [2024-11-27 14:11:16.150743] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.919 [2024-11-27 14:11:16.150812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.919 [2024-11-27 14:11:16.150851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.919 [2024-11-27 14:11:16.150869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.919 "name": "Existed_Raid", 00:11:45.919 "uuid": "58bb047c-4640-4fb0-bdd4-11b5d2b83f3a", 00:11:45.919 "strip_size_kb": 0, 00:11:45.919 "state": "configuring", 00:11:45.919 "raid_level": "raid1", 00:11:45.919 "superblock": true, 00:11:45.919 "num_base_bdevs": 2, 00:11:45.919 "num_base_bdevs_discovered": 0, 00:11:45.919 "num_base_bdevs_operational": 2, 00:11:45.919 "base_bdevs_list": [ 00:11:45.919 { 00:11:45.919 "name": "BaseBdev1", 00:11:45.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.919 "is_configured": false, 00:11:45.919 "data_offset": 0, 00:11:45.919 "data_size": 0 00:11:45.919 }, 00:11:45.919 { 00:11:45.919 "name": "BaseBdev2", 00:11:45.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.919 "is_configured": false, 00:11:45.919 "data_offset": 0, 00:11:45.919 "data_size": 0 00:11:45.919 } 00:11:45.919 ] 00:11:45.919 }' 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.919 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.178 [2024-11-27 14:11:16.654791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:46.178 [2024-11-27 14:11:16.654983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.178 [2024-11-27 14:11:16.662763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.178 [2024-11-27 14:11:16.662943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:46.178 [2024-11-27 14:11:16.662970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.178 [2024-11-27 14:11:16.662991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.178 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.437 [2024-11-27 14:11:16.708963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.437 BaseBdev1 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.437 [ 00:11:46.437 { 00:11:46.437 "name": "BaseBdev1", 00:11:46.437 "aliases": [ 00:11:46.437 "06e940f7-7f63-4374-8cf2-16168c95a2a1" 00:11:46.437 ], 00:11:46.437 "product_name": "Malloc disk", 00:11:46.437 "block_size": 512, 00:11:46.437 "num_blocks": 65536, 00:11:46.437 "uuid": "06e940f7-7f63-4374-8cf2-16168c95a2a1", 00:11:46.437 "assigned_rate_limits": { 00:11:46.437 "rw_ios_per_sec": 0, 00:11:46.437 "rw_mbytes_per_sec": 0, 00:11:46.437 "r_mbytes_per_sec": 0, 00:11:46.437 "w_mbytes_per_sec": 0 00:11:46.437 }, 00:11:46.437 "claimed": true, 
00:11:46.437 "claim_type": "exclusive_write", 00:11:46.437 "zoned": false, 00:11:46.437 "supported_io_types": { 00:11:46.437 "read": true, 00:11:46.437 "write": true, 00:11:46.437 "unmap": true, 00:11:46.437 "flush": true, 00:11:46.437 "reset": true, 00:11:46.437 "nvme_admin": false, 00:11:46.437 "nvme_io": false, 00:11:46.437 "nvme_io_md": false, 00:11:46.437 "write_zeroes": true, 00:11:46.437 "zcopy": true, 00:11:46.437 "get_zone_info": false, 00:11:46.437 "zone_management": false, 00:11:46.437 "zone_append": false, 00:11:46.437 "compare": false, 00:11:46.437 "compare_and_write": false, 00:11:46.437 "abort": true, 00:11:46.437 "seek_hole": false, 00:11:46.437 "seek_data": false, 00:11:46.437 "copy": true, 00:11:46.437 "nvme_iov_md": false 00:11:46.437 }, 00:11:46.437 "memory_domains": [ 00:11:46.437 { 00:11:46.437 "dma_device_id": "system", 00:11:46.437 "dma_device_type": 1 00:11:46.437 }, 00:11:46.437 { 00:11:46.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.437 "dma_device_type": 2 00:11:46.437 } 00:11:46.437 ], 00:11:46.437 "driver_specific": {} 00:11:46.437 } 00:11:46.437 ] 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.437 "name": "Existed_Raid", 00:11:46.437 "uuid": "77d06aab-bddc-4a93-a6df-78da02814b4a", 00:11:46.437 "strip_size_kb": 0, 00:11:46.437 "state": "configuring", 00:11:46.437 "raid_level": "raid1", 00:11:46.437 "superblock": true, 00:11:46.437 "num_base_bdevs": 2, 00:11:46.437 "num_base_bdevs_discovered": 1, 00:11:46.437 "num_base_bdevs_operational": 2, 00:11:46.437 "base_bdevs_list": [ 00:11:46.437 { 00:11:46.437 "name": "BaseBdev1", 00:11:46.437 "uuid": "06e940f7-7f63-4374-8cf2-16168c95a2a1", 00:11:46.437 "is_configured": true, 00:11:46.437 "data_offset": 2048, 00:11:46.437 "data_size": 63488 00:11:46.437 }, 00:11:46.437 { 00:11:46.437 "name": "BaseBdev2", 00:11:46.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.437 "is_configured": false, 00:11:46.437 
"data_offset": 0, 00:11:46.437 "data_size": 0 00:11:46.437 } 00:11:46.437 ] 00:11:46.437 }' 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.437 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.004 [2024-11-27 14:11:17.265236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.004 [2024-11-27 14:11:17.265298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.004 [2024-11-27 14:11:17.277299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.004 [2024-11-27 14:11:17.280054] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.004 [2024-11-27 14:11:17.280241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.004 "name": "Existed_Raid", 00:11:47.004 "uuid": "c1e537fb-1b50-4c99-b9a2-339ae8327930", 00:11:47.004 "strip_size_kb": 0, 00:11:47.004 "state": "configuring", 00:11:47.004 "raid_level": "raid1", 00:11:47.004 "superblock": true, 00:11:47.004 "num_base_bdevs": 2, 00:11:47.004 "num_base_bdevs_discovered": 1, 00:11:47.004 "num_base_bdevs_operational": 2, 00:11:47.004 "base_bdevs_list": [ 00:11:47.004 { 00:11:47.004 "name": "BaseBdev1", 00:11:47.004 "uuid": "06e940f7-7f63-4374-8cf2-16168c95a2a1", 00:11:47.004 "is_configured": true, 00:11:47.004 "data_offset": 2048, 00:11:47.004 "data_size": 63488 00:11:47.004 }, 00:11:47.004 { 00:11:47.004 "name": "BaseBdev2", 00:11:47.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.004 "is_configured": false, 00:11:47.004 "data_offset": 0, 00:11:47.004 "data_size": 0 00:11:47.004 } 00:11:47.004 ] 00:11:47.004 }' 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.004 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 [2024-11-27 14:11:17.856402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.571 [2024-11-27 14:11:17.856951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.571 [2024-11-27 14:11:17.856977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.571 BaseBdev2 00:11:47.571 [2024-11-27 14:11:17.857301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:11:47.571 [2024-11-27 14:11:17.857517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.571 [2024-11-27 14:11:17.857542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:47.571 [2024-11-27 14:11:17.857724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.571 14:11:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.571 [ 00:11:47.571 { 00:11:47.571 "name": "BaseBdev2", 00:11:47.571 "aliases": [ 00:11:47.571 "aa790a60-e33f-47f7-afaf-8fc074ffc974" 00:11:47.571 ], 00:11:47.571 "product_name": "Malloc disk", 00:11:47.571 "block_size": 512, 00:11:47.571 "num_blocks": 65536, 00:11:47.571 "uuid": "aa790a60-e33f-47f7-afaf-8fc074ffc974", 00:11:47.571 "assigned_rate_limits": { 00:11:47.571 "rw_ios_per_sec": 0, 00:11:47.571 "rw_mbytes_per_sec": 0, 00:11:47.571 "r_mbytes_per_sec": 0, 00:11:47.571 "w_mbytes_per_sec": 0 00:11:47.571 }, 00:11:47.571 "claimed": true, 00:11:47.571 "claim_type": "exclusive_write", 00:11:47.571 "zoned": false, 00:11:47.571 "supported_io_types": { 00:11:47.571 "read": true, 00:11:47.571 "write": true, 00:11:47.571 "unmap": true, 00:11:47.571 "flush": true, 00:11:47.571 "reset": true, 00:11:47.572 "nvme_admin": false, 00:11:47.572 "nvme_io": false, 00:11:47.572 "nvme_io_md": false, 00:11:47.572 "write_zeroes": true, 00:11:47.572 "zcopy": true, 00:11:47.572 "get_zone_info": false, 00:11:47.572 "zone_management": false, 00:11:47.572 "zone_append": false, 00:11:47.572 "compare": false, 00:11:47.572 "compare_and_write": false, 00:11:47.572 "abort": true, 00:11:47.572 "seek_hole": false, 00:11:47.572 "seek_data": false, 00:11:47.572 "copy": true, 00:11:47.572 "nvme_iov_md": false 00:11:47.572 }, 00:11:47.572 "memory_domains": [ 00:11:47.572 { 00:11:47.572 "dma_device_id": "system", 00:11:47.572 "dma_device_type": 1 00:11:47.572 }, 00:11:47.572 { 00:11:47.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.572 "dma_device_type": 2 00:11:47.572 } 00:11:47.572 ], 00:11:47.572 "driver_specific": {} 00:11:47.572 } 00:11:47.572 ] 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:47.572 "name": "Existed_Raid", 00:11:47.572 "uuid": "c1e537fb-1b50-4c99-b9a2-339ae8327930", 00:11:47.572 "strip_size_kb": 0, 00:11:47.572 "state": "online", 00:11:47.572 "raid_level": "raid1", 00:11:47.572 "superblock": true, 00:11:47.572 "num_base_bdevs": 2, 00:11:47.572 "num_base_bdevs_discovered": 2, 00:11:47.572 "num_base_bdevs_operational": 2, 00:11:47.572 "base_bdevs_list": [ 00:11:47.572 { 00:11:47.572 "name": "BaseBdev1", 00:11:47.572 "uuid": "06e940f7-7f63-4374-8cf2-16168c95a2a1", 00:11:47.572 "is_configured": true, 00:11:47.572 "data_offset": 2048, 00:11:47.572 "data_size": 63488 00:11:47.572 }, 00:11:47.572 { 00:11:47.572 "name": "BaseBdev2", 00:11:47.572 "uuid": "aa790a60-e33f-47f7-afaf-8fc074ffc974", 00:11:47.572 "is_configured": true, 00:11:47.572 "data_offset": 2048, 00:11:47.572 "data_size": 63488 00:11:47.572 } 00:11:47.572 ] 00:11:47.572 }' 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.572 14:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.139 14:11:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 [2024-11-27 14:11:18.400959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.139 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.139 "name": "Existed_Raid", 00:11:48.139 "aliases": [ 00:11:48.139 "c1e537fb-1b50-4c99-b9a2-339ae8327930" 00:11:48.139 ], 00:11:48.139 "product_name": "Raid Volume", 00:11:48.139 "block_size": 512, 00:11:48.139 "num_blocks": 63488, 00:11:48.139 "uuid": "c1e537fb-1b50-4c99-b9a2-339ae8327930", 00:11:48.139 "assigned_rate_limits": { 00:11:48.139 "rw_ios_per_sec": 0, 00:11:48.139 "rw_mbytes_per_sec": 0, 00:11:48.139 "r_mbytes_per_sec": 0, 00:11:48.139 "w_mbytes_per_sec": 0 00:11:48.139 }, 00:11:48.139 "claimed": false, 00:11:48.139 "zoned": false, 00:11:48.139 "supported_io_types": { 00:11:48.139 "read": true, 00:11:48.139 "write": true, 00:11:48.139 "unmap": false, 00:11:48.139 "flush": false, 00:11:48.139 "reset": true, 00:11:48.139 "nvme_admin": false, 00:11:48.139 "nvme_io": false, 00:11:48.139 "nvme_io_md": false, 00:11:48.139 "write_zeroes": true, 00:11:48.139 "zcopy": false, 00:11:48.139 "get_zone_info": false, 00:11:48.139 "zone_management": false, 00:11:48.139 "zone_append": false, 00:11:48.139 "compare": false, 00:11:48.139 "compare_and_write": false, 00:11:48.139 "abort": false, 00:11:48.139 "seek_hole": false, 00:11:48.139 "seek_data": false, 00:11:48.139 "copy": false, 00:11:48.140 "nvme_iov_md": false 00:11:48.140 }, 00:11:48.140 "memory_domains": [ 00:11:48.140 { 00:11:48.140 "dma_device_id": "system", 00:11:48.140 
"dma_device_type": 1 00:11:48.140 }, 00:11:48.140 { 00:11:48.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.140 "dma_device_type": 2 00:11:48.140 }, 00:11:48.140 { 00:11:48.140 "dma_device_id": "system", 00:11:48.140 "dma_device_type": 1 00:11:48.140 }, 00:11:48.140 { 00:11:48.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.140 "dma_device_type": 2 00:11:48.140 } 00:11:48.140 ], 00:11:48.140 "driver_specific": { 00:11:48.140 "raid": { 00:11:48.140 "uuid": "c1e537fb-1b50-4c99-b9a2-339ae8327930", 00:11:48.140 "strip_size_kb": 0, 00:11:48.140 "state": "online", 00:11:48.140 "raid_level": "raid1", 00:11:48.140 "superblock": true, 00:11:48.140 "num_base_bdevs": 2, 00:11:48.140 "num_base_bdevs_discovered": 2, 00:11:48.140 "num_base_bdevs_operational": 2, 00:11:48.140 "base_bdevs_list": [ 00:11:48.140 { 00:11:48.140 "name": "BaseBdev1", 00:11:48.140 "uuid": "06e940f7-7f63-4374-8cf2-16168c95a2a1", 00:11:48.140 "is_configured": true, 00:11:48.140 "data_offset": 2048, 00:11:48.140 "data_size": 63488 00:11:48.140 }, 00:11:48.140 { 00:11:48.140 "name": "BaseBdev2", 00:11:48.140 "uuid": "aa790a60-e33f-47f7-afaf-8fc074ffc974", 00:11:48.140 "is_configured": true, 00:11:48.140 "data_offset": 2048, 00:11:48.140 "data_size": 63488 00:11:48.140 } 00:11:48.140 ] 00:11:48.140 } 00:11:48.140 } 00:11:48.140 }' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:48.140 BaseBdev2' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.140 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.399 14:11:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 [2024-11-27 14:11:18.664705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.399 "name": "Existed_Raid", 00:11:48.399 "uuid": "c1e537fb-1b50-4c99-b9a2-339ae8327930", 00:11:48.399 "strip_size_kb": 0, 00:11:48.399 "state": "online", 00:11:48.399 "raid_level": "raid1", 00:11:48.399 "superblock": true, 00:11:48.399 "num_base_bdevs": 2, 00:11:48.399 "num_base_bdevs_discovered": 1, 00:11:48.399 "num_base_bdevs_operational": 1, 00:11:48.399 "base_bdevs_list": [ 00:11:48.399 { 00:11:48.399 "name": null, 00:11:48.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.399 "is_configured": false, 00:11:48.399 "data_offset": 0, 00:11:48.399 "data_size": 63488 00:11:48.399 }, 00:11:48.399 { 00:11:48.399 "name": "BaseBdev2", 00:11:48.399 "uuid": "aa790a60-e33f-47f7-afaf-8fc074ffc974", 00:11:48.399 "is_configured": true, 00:11:48.399 "data_offset": 2048, 00:11:48.399 "data_size": 63488 00:11:48.399 } 00:11:48.399 ] 00:11:48.399 }' 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.399 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.024 [2024-11-27 14:11:19.306633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.024 [2024-11-27 14:11:19.306765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.024 [2024-11-27 14:11:19.393310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.024 [2024-11-27 14:11:19.393382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.024 [2024-11-27 14:11:19.393404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63081 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63081 ']' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63081 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63081 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.024 killing process with pid 63081 
00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63081' 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63081 00:11:49.024 [2024-11-27 14:11:19.493575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.024 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63081 00:11:49.024 [2024-11-27 14:11:19.508449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.400 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:50.400 00:11:50.400 real 0m5.514s 00:11:50.400 user 0m8.308s 00:11:50.400 sys 0m0.778s 00:11:50.400 ************************************ 00:11:50.400 END TEST raid_state_function_test_sb 00:11:50.400 ************************************ 00:11:50.400 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.400 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.400 14:11:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:50.400 14:11:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:50.401 14:11:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.401 14:11:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.401 ************************************ 00:11:50.401 START TEST raid_superblock_test 00:11:50.401 ************************************ 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63334 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63334 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63334 ']' 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.401 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.401 [2024-11-27 14:11:20.741884] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:50.401 [2024-11-27 14:11:20.742100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63334 ] 00:11:50.660 [2024-11-27 14:11:20.934280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.660 [2024-11-27 14:11:21.094405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.919 [2024-11-27 14:11:21.360932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.919 [2024-11-27 14:11:21.361055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:51.486 14:11:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 malloc1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 [2024-11-27 14:11:21.845623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:51.486 [2024-11-27 14:11:21.845698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.486 [2024-11-27 14:11:21.845731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.486 [2024-11-27 14:11:21.845747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.486 
[2024-11-27 14:11:21.848539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.486 [2024-11-27 14:11:21.848752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:51.486 pt1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 malloc2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.486 14:11:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 [2024-11-27 14:11:21.902786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.486 [2024-11-27 14:11:21.902868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.486 [2024-11-27 14:11:21.902918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.486 [2024-11-27 14:11:21.902934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.486 [2024-11-27 14:11:21.905689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.486 [2024-11-27 14:11:21.905736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.486 pt2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 [2024-11-27 14:11:21.914923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:51.486 [2024-11-27 14:11:21.917457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.486 [2024-11-27 14:11:21.917687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:51.486 [2024-11-27 14:11:21.917711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.486 [2024-11-27 
14:11:21.918059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:51.486 [2024-11-27 14:11:21.918277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:51.486 [2024-11-27 14:11:21.918312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:51.486 [2024-11-27 14:11:21.918508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.486 14:11:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.486 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.487 "name": "raid_bdev1", 00:11:51.487 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:51.487 "strip_size_kb": 0, 00:11:51.487 "state": "online", 00:11:51.487 "raid_level": "raid1", 00:11:51.487 "superblock": true, 00:11:51.487 "num_base_bdevs": 2, 00:11:51.487 "num_base_bdevs_discovered": 2, 00:11:51.487 "num_base_bdevs_operational": 2, 00:11:51.487 "base_bdevs_list": [ 00:11:51.487 { 00:11:51.487 "name": "pt1", 00:11:51.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.487 "is_configured": true, 00:11:51.487 "data_offset": 2048, 00:11:51.487 "data_size": 63488 00:11:51.487 }, 00:11:51.487 { 00:11:51.487 "name": "pt2", 00:11:51.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.487 "is_configured": true, 00:11:51.487 "data_offset": 2048, 00:11:51.487 "data_size": 63488 00:11:51.487 } 00:11:51.487 ] 00:11:51.487 }' 00:11:51.487 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.487 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.053 
14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.053 [2024-11-27 14:11:22.467363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.053 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.053 "name": "raid_bdev1", 00:11:52.053 "aliases": [ 00:11:52.053 "fbdba2e3-d4bf-4dee-b79b-82808f8484a9" 00:11:52.053 ], 00:11:52.053 "product_name": "Raid Volume", 00:11:52.053 "block_size": 512, 00:11:52.053 "num_blocks": 63488, 00:11:52.053 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:52.053 "assigned_rate_limits": { 00:11:52.053 "rw_ios_per_sec": 0, 00:11:52.053 "rw_mbytes_per_sec": 0, 00:11:52.053 "r_mbytes_per_sec": 0, 00:11:52.053 "w_mbytes_per_sec": 0 00:11:52.053 }, 00:11:52.053 "claimed": false, 00:11:52.053 "zoned": false, 00:11:52.053 "supported_io_types": { 00:11:52.053 "read": true, 00:11:52.053 "write": true, 00:11:52.053 "unmap": false, 00:11:52.053 "flush": false, 00:11:52.053 "reset": true, 00:11:52.053 "nvme_admin": false, 00:11:52.053 "nvme_io": false, 00:11:52.053 "nvme_io_md": false, 00:11:52.053 "write_zeroes": true, 00:11:52.053 "zcopy": false, 00:11:52.053 "get_zone_info": false, 00:11:52.053 "zone_management": false, 00:11:52.053 "zone_append": false, 00:11:52.053 "compare": false, 00:11:52.053 "compare_and_write": false, 00:11:52.053 "abort": false, 00:11:52.053 "seek_hole": false, 
00:11:52.053 "seek_data": false, 00:11:52.053 "copy": false, 00:11:52.053 "nvme_iov_md": false 00:11:52.053 }, 00:11:52.053 "memory_domains": [ 00:11:52.053 { 00:11:52.053 "dma_device_id": "system", 00:11:52.053 "dma_device_type": 1 00:11:52.053 }, 00:11:52.053 { 00:11:52.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.053 "dma_device_type": 2 00:11:52.053 }, 00:11:52.053 { 00:11:52.053 "dma_device_id": "system", 00:11:52.053 "dma_device_type": 1 00:11:52.053 }, 00:11:52.053 { 00:11:52.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.053 "dma_device_type": 2 00:11:52.053 } 00:11:52.053 ], 00:11:52.053 "driver_specific": { 00:11:52.053 "raid": { 00:11:52.053 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:52.053 "strip_size_kb": 0, 00:11:52.053 "state": "online", 00:11:52.053 "raid_level": "raid1", 00:11:52.053 "superblock": true, 00:11:52.053 "num_base_bdevs": 2, 00:11:52.053 "num_base_bdevs_discovered": 2, 00:11:52.053 "num_base_bdevs_operational": 2, 00:11:52.053 "base_bdevs_list": [ 00:11:52.053 { 00:11:52.053 "name": "pt1", 00:11:52.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.053 "is_configured": true, 00:11:52.053 "data_offset": 2048, 00:11:52.053 "data_size": 63488 00:11:52.054 }, 00:11:52.054 { 00:11:52.054 "name": "pt2", 00:11:52.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.054 "is_configured": true, 00:11:52.054 "data_offset": 2048, 00:11:52.054 "data_size": 63488 00:11:52.054 } 00:11:52.054 ] 00:11:52.054 } 00:11:52.054 } 00:11:52.054 }' 00:11:52.054 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:52.312 pt2' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.312 14:11:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.312 [2024-11-27 14:11:22.739408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fbdba2e3-d4bf-4dee-b79b-82808f8484a9 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fbdba2e3-d4bf-4dee-b79b-82808f8484a9 ']' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.312 [2024-11-27 14:11:22.787044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.312 [2024-11-27 14:11:22.787077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.312 [2024-11-27 14:11:22.787186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.312 [2024-11-27 14:11:22.787267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.312 [2024-11-27 14:11:22.787292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.312 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 [2024-11-27 14:11:22.939166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:52.571 [2024-11-27 14:11:22.941762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:52.571 [2024-11-27 14:11:22.942041] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:11:52.571 [2024-11-27 14:11:22.942150] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:52.571 [2024-11-27 14:11:22.942189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.571 [2024-11-27 14:11:22.942208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:52.571 request: 00:11:52.571 { 00:11:52.571 "name": "raid_bdev1", 00:11:52.571 "raid_level": "raid1", 00:11:52.571 "base_bdevs": [ 00:11:52.571 "malloc1", 00:11:52.571 "malloc2" 00:11:52.571 ], 00:11:52.571 "superblock": false, 00:11:52.571 "method": "bdev_raid_create", 00:11:52.571 "req_id": 1 00:11:52.571 } 00:11:52.571 Got JSON-RPC error response 00:11:52.571 response: 00:11:52.571 { 00:11:52.571 "code": -17, 00:11:52.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:52.571 } 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 [2024-11-27 14:11:23.007175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:52.571 [2024-11-27 14:11:23.007268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.571 [2024-11-27 14:11:23.007301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:52.571 [2024-11-27 14:11:23.007318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.571 [2024-11-27 14:11:23.010287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.571 [2024-11-27 14:11:23.010338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:52.571 [2024-11-27 14:11:23.010461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:52.571 [2024-11-27 14:11:23.010541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:52.571 pt1 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.571 14:11:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.571 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.571 "name": "raid_bdev1", 00:11:52.571 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:52.571 "strip_size_kb": 0, 00:11:52.571 "state": "configuring", 00:11:52.571 "raid_level": "raid1", 00:11:52.571 "superblock": true, 00:11:52.571 "num_base_bdevs": 2, 00:11:52.571 "num_base_bdevs_discovered": 1, 00:11:52.571 "num_base_bdevs_operational": 2, 00:11:52.571 "base_bdevs_list": [ 00:11:52.571 { 00:11:52.571 "name": "pt1", 00:11:52.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.571 
"is_configured": true, 00:11:52.571 "data_offset": 2048, 00:11:52.571 "data_size": 63488 00:11:52.571 }, 00:11:52.571 { 00:11:52.571 "name": null, 00:11:52.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.572 "is_configured": false, 00:11:52.572 "data_offset": 2048, 00:11:52.572 "data_size": 63488 00:11:52.572 } 00:11:52.572 ] 00:11:52.572 }' 00:11:52.572 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.572 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.139 [2024-11-27 14:11:23.531288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:53.139 [2024-11-27 14:11:23.531387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.139 [2024-11-27 14:11:23.531421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:53.139 [2024-11-27 14:11:23.531439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.139 [2024-11-27 14:11:23.532053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.139 [2024-11-27 14:11:23.532093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:53.139 [2024-11-27 14:11:23.532198] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:53.139 [2024-11-27 14:11:23.532240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:53.139 [2024-11-27 14:11:23.532385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.139 [2024-11-27 14:11:23.532407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.139 [2024-11-27 14:11:23.532721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:53.139 [2024-11-27 14:11:23.532931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.139 [2024-11-27 14:11:23.532947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:53.139 [2024-11-27 14:11:23.533120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.139 pt2 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.139 
14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.139 "name": "raid_bdev1", 00:11:53.139 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:53.139 "strip_size_kb": 0, 00:11:53.139 "state": "online", 00:11:53.139 "raid_level": "raid1", 00:11:53.139 "superblock": true, 00:11:53.139 "num_base_bdevs": 2, 00:11:53.139 "num_base_bdevs_discovered": 2, 00:11:53.139 "num_base_bdevs_operational": 2, 00:11:53.139 "base_bdevs_list": [ 00:11:53.139 { 00:11:53.139 "name": "pt1", 00:11:53.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.139 "is_configured": true, 00:11:53.139 "data_offset": 2048, 00:11:53.139 "data_size": 63488 00:11:53.139 }, 00:11:53.139 { 00:11:53.139 "name": "pt2", 00:11:53.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.139 "is_configured": true, 00:11:53.139 "data_offset": 2048, 00:11:53.139 "data_size": 63488 00:11:53.139 } 00:11:53.139 ] 00:11:53.139 }' 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:53.139 14:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.705 [2024-11-27 14:11:24.035740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.705 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.705 "name": "raid_bdev1", 00:11:53.705 "aliases": [ 00:11:53.705 "fbdba2e3-d4bf-4dee-b79b-82808f8484a9" 00:11:53.705 ], 00:11:53.705 "product_name": "Raid Volume", 00:11:53.705 "block_size": 512, 00:11:53.705 "num_blocks": 63488, 00:11:53.705 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:53.705 "assigned_rate_limits": { 00:11:53.705 "rw_ios_per_sec": 0, 00:11:53.705 "rw_mbytes_per_sec": 0, 00:11:53.705 "r_mbytes_per_sec": 0, 00:11:53.705 "w_mbytes_per_sec": 0 
00:11:53.705 }, 00:11:53.705 "claimed": false, 00:11:53.705 "zoned": false, 00:11:53.705 "supported_io_types": { 00:11:53.705 "read": true, 00:11:53.705 "write": true, 00:11:53.705 "unmap": false, 00:11:53.705 "flush": false, 00:11:53.705 "reset": true, 00:11:53.705 "nvme_admin": false, 00:11:53.705 "nvme_io": false, 00:11:53.705 "nvme_io_md": false, 00:11:53.705 "write_zeroes": true, 00:11:53.705 "zcopy": false, 00:11:53.705 "get_zone_info": false, 00:11:53.705 "zone_management": false, 00:11:53.705 "zone_append": false, 00:11:53.705 "compare": false, 00:11:53.705 "compare_and_write": false, 00:11:53.705 "abort": false, 00:11:53.705 "seek_hole": false, 00:11:53.705 "seek_data": false, 00:11:53.705 "copy": false, 00:11:53.705 "nvme_iov_md": false 00:11:53.705 }, 00:11:53.705 "memory_domains": [ 00:11:53.705 { 00:11:53.705 "dma_device_id": "system", 00:11:53.705 "dma_device_type": 1 00:11:53.705 }, 00:11:53.706 { 00:11:53.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.706 "dma_device_type": 2 00:11:53.706 }, 00:11:53.706 { 00:11:53.706 "dma_device_id": "system", 00:11:53.706 "dma_device_type": 1 00:11:53.706 }, 00:11:53.706 { 00:11:53.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.706 "dma_device_type": 2 00:11:53.706 } 00:11:53.706 ], 00:11:53.706 "driver_specific": { 00:11:53.706 "raid": { 00:11:53.706 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:53.706 "strip_size_kb": 0, 00:11:53.706 "state": "online", 00:11:53.706 "raid_level": "raid1", 00:11:53.706 "superblock": true, 00:11:53.706 "num_base_bdevs": 2, 00:11:53.706 "num_base_bdevs_discovered": 2, 00:11:53.706 "num_base_bdevs_operational": 2, 00:11:53.706 "base_bdevs_list": [ 00:11:53.706 { 00:11:53.706 "name": "pt1", 00:11:53.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.706 "is_configured": true, 00:11:53.706 "data_offset": 2048, 00:11:53.706 "data_size": 63488 00:11:53.706 }, 00:11:53.706 { 00:11:53.706 "name": "pt2", 00:11:53.706 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:53.706 "is_configured": true, 00:11:53.706 "data_offset": 2048, 00:11:53.706 "data_size": 63488 00:11:53.706 } 00:11:53.706 ] 00:11:53.706 } 00:11:53.706 } 00:11:53.706 }' 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:53.706 pt2' 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.706 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.965 [2024-11-27 14:11:24.291731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fbdba2e3-d4bf-4dee-b79b-82808f8484a9 '!=' fbdba2e3-d4bf-4dee-b79b-82808f8484a9 ']' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.965 [2024-11-27 14:11:24.347487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:53.965 "name": "raid_bdev1", 00:11:53.965 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:53.965 "strip_size_kb": 0, 00:11:53.965 "state": "online", 00:11:53.965 "raid_level": "raid1", 00:11:53.965 "superblock": true, 00:11:53.965 "num_base_bdevs": 2, 00:11:53.965 "num_base_bdevs_discovered": 1, 00:11:53.965 "num_base_bdevs_operational": 1, 00:11:53.965 "base_bdevs_list": [ 00:11:53.965 { 00:11:53.965 "name": null, 00:11:53.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.965 "is_configured": false, 00:11:53.965 "data_offset": 0, 00:11:53.965 "data_size": 63488 00:11:53.965 }, 00:11:53.965 { 00:11:53.965 "name": "pt2", 00:11:53.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.965 "is_configured": true, 00:11:53.965 "data_offset": 2048, 00:11:53.965 "data_size": 63488 00:11:53.965 } 00:11:53.965 ] 00:11:53.965 }' 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.965 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 [2024-11-27 14:11:24.867642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.531 [2024-11-27 14:11:24.867677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.531 [2024-11-27 14:11:24.867794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.531 [2024-11-27 14:11:24.867920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.531 [2024-11-27 14:11:24.867942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.531 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.532 [2024-11-27 14:11:24.939597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:54.532 [2024-11-27 14:11:24.939669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.532 [2024-11-27 14:11:24.939694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:54.532 [2024-11-27 14:11:24.939712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.532 [2024-11-27 14:11:24.942724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.532 [2024-11-27 14:11:24.942777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:54.532 [2024-11-27 14:11:24.942893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:54.532 [2024-11-27 14:11:24.942966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.532 [2024-11-27 14:11:24.943095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:54.532 [2024-11-27 14:11:24.943119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.532 [2024-11-27 14:11:24.943418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:54.532 [2024-11-27 14:11:24.943616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:54.532 [2024-11-27 14:11:24.943632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:11:54.532 [2024-11-27 14:11:24.943874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.532 pt2 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:54.532 "name": "raid_bdev1", 00:11:54.532 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:54.532 "strip_size_kb": 0, 00:11:54.532 "state": "online", 00:11:54.532 "raid_level": "raid1", 00:11:54.532 "superblock": true, 00:11:54.532 "num_base_bdevs": 2, 00:11:54.532 "num_base_bdevs_discovered": 1, 00:11:54.532 "num_base_bdevs_operational": 1, 00:11:54.532 "base_bdevs_list": [ 00:11:54.532 { 00:11:54.532 "name": null, 00:11:54.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.532 "is_configured": false, 00:11:54.532 "data_offset": 2048, 00:11:54.532 "data_size": 63488 00:11:54.532 }, 00:11:54.532 { 00:11:54.532 "name": "pt2", 00:11:54.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.532 "is_configured": true, 00:11:54.532 "data_offset": 2048, 00:11:54.532 "data_size": 63488 00:11:54.532 } 00:11:54.532 ] 00:11:54.532 }' 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.532 14:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.101 [2024-11-27 14:11:25.463917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.101 [2024-11-27 14:11:25.463955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.101 [2024-11-27 14:11:25.464048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.101 [2024-11-27 14:11:25.464121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.101 [2024-11-27 14:11:25.464138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.101 [2024-11-27 14:11:25.535949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:55.101 [2024-11-27 14:11:25.536033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.101 [2024-11-27 14:11:25.536065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:55.101 [2024-11-27 14:11:25.536079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.101 [2024-11-27 14:11:25.539015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.101 [2024-11-27 14:11:25.539061] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:55.101 [2024-11-27 14:11:25.539171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:55.101 [2024-11-27 14:11:25.539230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:55.101 [2024-11-27 14:11:25.539406] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:55.101 [2024-11-27 14:11:25.539425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.101 [2024-11-27 14:11:25.539448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:55.101 [2024-11-27 14:11:25.539512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:55.101 [2024-11-27 14:11:25.539618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:55.101 [2024-11-27 14:11:25.539634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.101 [2024-11-27 14:11:25.539964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:55.101 [2024-11-27 14:11:25.540156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:55.101 [2024-11-27 14:11:25.540178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:55.101 [2024-11-27 14:11:25.540412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.101 pt1 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.101 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.102 "name": "raid_bdev1", 00:11:55.102 "uuid": "fbdba2e3-d4bf-4dee-b79b-82808f8484a9", 00:11:55.102 "strip_size_kb": 0, 00:11:55.102 "state": "online", 00:11:55.102 "raid_level": "raid1", 00:11:55.102 "superblock": true, 00:11:55.102 "num_base_bdevs": 2, 00:11:55.102 "num_base_bdevs_discovered": 1, 00:11:55.102 "num_base_bdevs_operational": 
1, 00:11:55.102 "base_bdevs_list": [ 00:11:55.102 { 00:11:55.102 "name": null, 00:11:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.102 "is_configured": false, 00:11:55.102 "data_offset": 2048, 00:11:55.102 "data_size": 63488 00:11:55.102 }, 00:11:55.102 { 00:11:55.102 "name": "pt2", 00:11:55.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.102 "is_configured": true, 00:11:55.102 "data_offset": 2048, 00:11:55.102 "data_size": 63488 00:11:55.102 } 00:11:55.102 ] 00:11:55.102 }' 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.102 14:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:55.668 [2024-11-27 14:11:26.112768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fbdba2e3-d4bf-4dee-b79b-82808f8484a9 '!=' fbdba2e3-d4bf-4dee-b79b-82808f8484a9 ']' 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63334 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63334 ']' 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63334 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.668 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63334 00:11:55.926 killing process with pid 63334 00:11:55.926 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.926 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.926 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63334' 00:11:55.926 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63334 00:11:55.926 [2024-11-27 14:11:26.190297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.926 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63334 00:11:55.926 [2024-11-27 14:11:26.190407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.926 [2024-11-27 14:11:26.190475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.926 [2024-11-27 14:11:26.190499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:11:55.926 [2024-11-27 14:11:26.377123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.332 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:57.332 00:11:57.332 real 0m6.850s 00:11:57.332 user 0m10.867s 00:11:57.332 sys 0m0.957s 00:11:57.332 ************************************ 00:11:57.332 END TEST raid_superblock_test 00:11:57.332 ************************************ 00:11:57.332 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.332 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.332 14:11:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:57.332 14:11:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:57.332 14:11:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.332 14:11:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.332 ************************************ 00:11:57.332 START TEST raid_read_error_test 00:11:57.332 ************************************ 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:57.332 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qaihmte2yq 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63670 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63670 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.333 
14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63670 ']' 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.333 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.333 [2024-11-27 14:11:27.683041] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:11:57.333 [2024-11-27 14:11:27.683221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63670 ] 00:11:57.592 [2024-11-27 14:11:27.865836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.592 [2024-11-27 14:11:28.001319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.851 [2024-11-27 14:11:28.226984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.851 [2024-11-27 14:11:28.227043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.418 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.418 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:58.418 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:11:58.418 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.418 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.418 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.418 BaseBdev1_malloc 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 true 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 [2024-11-27 14:11:28.705298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:58.419 [2024-11-27 14:11:28.705368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.419 [2024-11-27 14:11:28.705399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:58.419 [2024-11-27 14:11:28.705419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.419 [2024-11-27 14:11:28.708280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.419 [2024-11-27 14:11:28.708333] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:11:58.419 BaseBdev1 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 BaseBdev2_malloc 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 true 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 [2024-11-27 14:11:28.766104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:58.419 [2024-11-27 14:11:28.766319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.419 [2024-11-27 14:11:28.766367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:58.419 [2024-11-27 14:11:28.766388] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.419 [2024-11-27 14:11:28.769241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.419 [2024-11-27 14:11:28.769305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.419 BaseBdev2 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 [2024-11-27 14:11:28.774285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.419 [2024-11-27 14:11:28.776711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.419 [2024-11-27 14:11:28.777155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.419 [2024-11-27 14:11:28.777186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.419 [2024-11-27 14:11:28.777485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:58.419 [2024-11-27 14:11:28.777714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.419 [2024-11-27 14:11:28.777731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:58.419 [2024-11-27 14:11:28.777960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.419 "name": "raid_bdev1", 00:11:58.419 "uuid": "e09f13f8-4028-424c-83f9-d9ab893ca984", 00:11:58.419 "strip_size_kb": 0, 00:11:58.419 "state": "online", 00:11:58.419 "raid_level": "raid1", 00:11:58.419 "superblock": true, 00:11:58.419 "num_base_bdevs": 2, 00:11:58.419 
"num_base_bdevs_discovered": 2, 00:11:58.419 "num_base_bdevs_operational": 2, 00:11:58.419 "base_bdevs_list": [ 00:11:58.419 { 00:11:58.419 "name": "BaseBdev1", 00:11:58.419 "uuid": "8d4b0ab6-3b67-524a-9177-3f93d283db03", 00:11:58.419 "is_configured": true, 00:11:58.419 "data_offset": 2048, 00:11:58.419 "data_size": 63488 00:11:58.419 }, 00:11:58.419 { 00:11:58.419 "name": "BaseBdev2", 00:11:58.419 "uuid": "b3dce831-e34b-555c-a1f4-d2b260980c0e", 00:11:58.419 "is_configured": true, 00:11:58.419 "data_offset": 2048, 00:11:58.419 "data_size": 63488 00:11:58.419 } 00:11:58.419 ] 00:11:58.419 }' 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.419 14:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.986 14:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.986 14:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.986 [2024-11-27 14:11:29.435883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:59.920 14:11:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.920 "name": "raid_bdev1", 00:11:59.920 "uuid": "e09f13f8-4028-424c-83f9-d9ab893ca984", 00:11:59.920 "strip_size_kb": 0, 00:11:59.920 "state": "online", 
00:11:59.920 "raid_level": "raid1", 00:11:59.920 "superblock": true, 00:11:59.920 "num_base_bdevs": 2, 00:11:59.920 "num_base_bdevs_discovered": 2, 00:11:59.920 "num_base_bdevs_operational": 2, 00:11:59.920 "base_bdevs_list": [ 00:11:59.920 { 00:11:59.920 "name": "BaseBdev1", 00:11:59.920 "uuid": "8d4b0ab6-3b67-524a-9177-3f93d283db03", 00:11:59.920 "is_configured": true, 00:11:59.920 "data_offset": 2048, 00:11:59.920 "data_size": 63488 00:11:59.920 }, 00:11:59.920 { 00:11:59.920 "name": "BaseBdev2", 00:11:59.920 "uuid": "b3dce831-e34b-555c-a1f4-d2b260980c0e", 00:11:59.920 "is_configured": true, 00:11:59.920 "data_offset": 2048, 00:11:59.920 "data_size": 63488 00:11:59.920 } 00:11:59.920 ] 00:11:59.920 }' 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.920 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.521 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.521 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.521 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.521 [2024-11-27 14:11:30.931770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.521 [2024-11-27 14:11:30.931836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.521 [2024-11-27 14:11:30.935395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.521 { 00:12:00.521 "results": [ 00:12:00.521 { 00:12:00.521 "job": "raid_bdev1", 00:12:00.521 "core_mask": "0x1", 00:12:00.521 "workload": "randrw", 00:12:00.521 "percentage": 50, 00:12:00.521 "status": "finished", 00:12:00.521 "queue_depth": 1, 00:12:00.521 "io_size": 131072, 00:12:00.521 "runtime": 1.493521, 00:12:00.521 "iops": 11727.320874631157, 00:12:00.521 "mibps": 1465.9151093288947, 
00:12:00.521 "io_failed": 0, 00:12:00.521 "io_timeout": 0, 00:12:00.521 "avg_latency_us": 80.89126971686606, 00:12:00.521 "min_latency_us": 40.49454545454545, 00:12:00.521 "max_latency_us": 1899.0545454545454 00:12:00.521 } 00:12:00.521 ], 00:12:00.521 "core_count": 1 00:12:00.521 } 00:12:00.521 [2024-11-27 14:11:30.935649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.521 [2024-11-27 14:11:30.935865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.521 [2024-11-27 14:11:30.935892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:00.521 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.521 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63670 00:12:00.521 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63670 ']' 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63670 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63670 00:12:00.522 killing process with pid 63670 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63670' 00:12:00.522 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63670 00:12:00.522 14:11:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63670 00:12:00.522 [2024-11-27 14:11:30.971553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.780 [2024-11-27 14:11:31.122086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qaihmte2yq 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:02.151 00:12:02.151 real 0m4.969s 00:12:02.151 user 0m6.203s 00:12:02.151 sys 0m0.569s 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.151 14:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.151 ************************************ 00:12:02.151 END TEST raid_read_error_test 00:12:02.151 ************************************ 00:12:02.151 14:11:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:02.151 14:11:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.151 14:11:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.151 14:11:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.151 ************************************ 00:12:02.151 START TEST 
raid_write_error_test 00:12:02.151 ************************************ 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:02.151 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:02.152 14:11:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.s0Nr5avMTk 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63821 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63821 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63821 ']' 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.152 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.152 [2024-11-27 14:11:32.657875] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:12:02.152 [2024-11-27 14:11:32.658063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63821 ] 00:12:02.409 [2024-11-27 14:11:32.839055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.666 [2024-11-27 14:11:33.018718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.923 [2024-11-27 14:11:33.268453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.923 [2024-11-27 14:11:33.268556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 BaseBdev1_malloc 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 true 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 [2024-11-27 14:11:33.784066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:03.490 [2024-11-27 14:11:33.784151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.490 [2024-11-27 14:11:33.784187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:03.490 [2024-11-27 14:11:33.784207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.490 [2024-11-27 14:11:33.787533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.490 [2024-11-27 14:11:33.787588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.490 BaseBdev1 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 BaseBdev2_malloc 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:03.490 14:11:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 true 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 [2024-11-27 14:11:33.853693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:03.490 [2024-11-27 14:11:33.854038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.490 [2024-11-27 14:11:33.854094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:03.490 [2024-11-27 14:11:33.854114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.490 [2024-11-27 14:11:33.857292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.490 [2024-11-27 14:11:33.857470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.490 BaseBdev2 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.490 [2024-11-27 14:11:33.861816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:03.490 [2024-11-27 14:11:33.864398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.490 [2024-11-27 14:11:33.864883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.490 [2024-11-27 14:11:33.864915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.490 [2024-11-27 14:11:33.865278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:03.490 [2024-11-27 14:11:33.865561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.490 [2024-11-27 14:11:33.865579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:03.490 [2024-11-27 14:11:33.865927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.490 14:11:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.491 "name": "raid_bdev1", 00:12:03.491 "uuid": "c441a540-3dab-426a-88e7-9c995e296019", 00:12:03.491 "strip_size_kb": 0, 00:12:03.491 "state": "online", 00:12:03.491 "raid_level": "raid1", 00:12:03.491 "superblock": true, 00:12:03.491 "num_base_bdevs": 2, 00:12:03.491 "num_base_bdevs_discovered": 2, 00:12:03.491 "num_base_bdevs_operational": 2, 00:12:03.491 "base_bdevs_list": [ 00:12:03.491 { 00:12:03.491 "name": "BaseBdev1", 00:12:03.491 "uuid": "6ab1e980-8324-51c3-bcb8-1258093663d7", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 2048, 00:12:03.491 "data_size": 63488 00:12:03.491 }, 00:12:03.491 { 00:12:03.491 "name": "BaseBdev2", 00:12:03.491 "uuid": "0c151f21-287c-5e97-9b2c-2eb869237190", 00:12:03.491 "is_configured": true, 00:12:03.491 "data_offset": 2048, 00:12:03.491 "data_size": 63488 00:12:03.491 } 00:12:03.491 ] 00:12:03.491 }' 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.491 14:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.058 14:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:04.058 14:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:04.058 [2024-11-27 14:11:34.439526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.029 [2024-11-27 14:11:35.321240] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:05.029 [2024-11-27 14:11:35.321452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.029 [2024-11-27 14:11:35.321723] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.029 14:11:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.029 "name": "raid_bdev1", 00:12:05.029 "uuid": "c441a540-3dab-426a-88e7-9c995e296019", 00:12:05.029 "strip_size_kb": 0, 00:12:05.029 "state": "online", 00:12:05.029 "raid_level": "raid1", 00:12:05.029 "superblock": true, 00:12:05.029 "num_base_bdevs": 2, 00:12:05.029 "num_base_bdevs_discovered": 1, 00:12:05.029 "num_base_bdevs_operational": 1, 00:12:05.029 "base_bdevs_list": [ 00:12:05.029 { 00:12:05.029 "name": null, 00:12:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.029 "is_configured": false, 00:12:05.029 "data_offset": 0, 00:12:05.029 "data_size": 63488 00:12:05.029 }, 
00:12:05.029 { 00:12:05.029 "name": "BaseBdev2", 00:12:05.029 "uuid": "0c151f21-287c-5e97-9b2c-2eb869237190", 00:12:05.029 "is_configured": true, 00:12:05.029 "data_offset": 2048, 00:12:05.029 "data_size": 63488 00:12:05.029 } 00:12:05.029 ] 00:12:05.029 }' 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.029 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.596 [2024-11-27 14:11:35.876972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.596 [2024-11-27 14:11:35.877010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.596 [2024-11-27 14:11:35.880339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.596 [2024-11-27 14:11:35.880535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.596 [2024-11-27 14:11:35.880639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.596 [2024-11-27 14:11:35.880660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:05.596 { 00:12:05.596 "results": [ 00:12:05.596 { 00:12:05.596 "job": "raid_bdev1", 00:12:05.596 "core_mask": "0x1", 00:12:05.596 "workload": "randrw", 00:12:05.596 "percentage": 50, 00:12:05.596 "status": "finished", 00:12:05.596 "queue_depth": 1, 00:12:05.596 "io_size": 131072, 00:12:05.596 "runtime": 1.434709, 00:12:05.596 "iops": 13909.440869193682, 00:12:05.596 "mibps": 1738.6801086492103, 00:12:05.596 "io_failed": 0, 
00:12:05.596 "io_timeout": 0, 00:12:05.596 "avg_latency_us": 67.34373439749267, 00:12:05.596 "min_latency_us": 40.49454545454545, 00:12:05.596 "max_latency_us": 1899.0545454545454 00:12:05.596 } 00:12:05.596 ], 00:12:05.596 "core_count": 1 00:12:05.596 } 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63821 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63821 ']' 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63821 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63821 00:12:05.596 killing process with pid 63821 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63821' 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63821 00:12:05.596 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63821 00:12:05.596 [2024-11-27 14:11:35.916032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.596 [2024-11-27 14:11:36.040753] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.s0Nr5avMTk 00:12:06.973 14:11:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:06.973 ************************************ 00:12:06.973 END TEST raid_write_error_test 00:12:06.973 ************************************ 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:06.973 00:12:06.973 real 0m4.645s 00:12:06.973 user 0m5.826s 00:12:06.973 sys 0m0.560s 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.973 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.973 14:11:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:06.973 14:11:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:06.973 14:11:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:06.973 14:11:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:06.973 14:11:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.973 14:11:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.973 ************************************ 00:12:06.973 START TEST raid_state_function_test 00:12:06.973 ************************************ 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:12:06.973 14:11:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63959 00:12:06.973 Process raid pid: 63959 00:12:06.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63959' 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63959 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63959 ']' 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.973 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.973 [2024-11-27 14:11:37.364881] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:12:06.973 [2024-11-27 14:11:37.365051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.231 [2024-11-27 14:11:37.553165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.231 [2024-11-27 14:11:37.712173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.489 [2024-11-27 14:11:37.946969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.489 [2024-11-27 14:11:37.947025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.056 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.056 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:08.056 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.056 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.056 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.057 [2024-11-27 14:11:38.406813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.057 [2024-11-27 14:11:38.406898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.057 [2024-11-27 14:11:38.406917] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.057 [2024-11-27 14:11:38.406935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.057 [2024-11-27 14:11:38.406946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.057 [2024-11-27 14:11:38.406962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.057 14:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.057 "name": "Existed_Raid", 00:12:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.057 "strip_size_kb": 64, 00:12:08.057 "state": "configuring", 00:12:08.057 "raid_level": "raid0", 00:12:08.057 "superblock": false, 00:12:08.057 "num_base_bdevs": 3, 00:12:08.057 "num_base_bdevs_discovered": 0, 00:12:08.057 "num_base_bdevs_operational": 3, 00:12:08.057 "base_bdevs_list": [ 00:12:08.057 { 00:12:08.057 "name": "BaseBdev1", 00:12:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.057 "is_configured": false, 00:12:08.057 "data_offset": 0, 00:12:08.057 "data_size": 0 00:12:08.057 }, 00:12:08.057 { 00:12:08.057 "name": "BaseBdev2", 00:12:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.057 "is_configured": false, 00:12:08.057 "data_offset": 0, 00:12:08.057 "data_size": 0 00:12:08.057 }, 00:12:08.057 { 00:12:08.057 "name": "BaseBdev3", 00:12:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.057 "is_configured": false, 00:12:08.057 "data_offset": 0, 00:12:08.057 "data_size": 0 00:12:08.057 } 00:12:08.057 ] 00:12:08.057 }' 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.057 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.624 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.624 14:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 [2024-11-27 14:11:38.914928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.624 [2024-11-27 14:11:38.914974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:08.624 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.625 [2024-11-27 14:11:38.922925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.625 [2024-11-27 14:11:38.922994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.625 [2024-11-27 14:11:38.923011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.625 [2024-11-27 14:11:38.923028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.625 [2024-11-27 14:11:38.923038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.625 [2024-11-27 14:11:38.923054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.625 [2024-11-27 14:11:38.967884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.625 BaseBdev1 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.625 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.625 [ 00:12:08.625 { 00:12:08.625 "name": "BaseBdev1", 00:12:08.625 "aliases": [ 00:12:08.625 "ba73cc3c-02d2-4e43-a86a-c0be1a395f33" 00:12:08.625 ], 00:12:08.625 
"product_name": "Malloc disk", 00:12:08.625 "block_size": 512, 00:12:08.625 "num_blocks": 65536, 00:12:08.625 "uuid": "ba73cc3c-02d2-4e43-a86a-c0be1a395f33", 00:12:08.625 "assigned_rate_limits": { 00:12:08.625 "rw_ios_per_sec": 0, 00:12:08.625 "rw_mbytes_per_sec": 0, 00:12:08.625 "r_mbytes_per_sec": 0, 00:12:08.625 "w_mbytes_per_sec": 0 00:12:08.625 }, 00:12:08.625 "claimed": true, 00:12:08.625 "claim_type": "exclusive_write", 00:12:08.625 "zoned": false, 00:12:08.625 "supported_io_types": { 00:12:08.625 "read": true, 00:12:08.625 "write": true, 00:12:08.625 "unmap": true, 00:12:08.625 "flush": true, 00:12:08.625 "reset": true, 00:12:08.625 "nvme_admin": false, 00:12:08.625 "nvme_io": false, 00:12:08.625 "nvme_io_md": false, 00:12:08.625 "write_zeroes": true, 00:12:08.625 "zcopy": true, 00:12:08.625 "get_zone_info": false, 00:12:08.625 "zone_management": false, 00:12:08.625 "zone_append": false, 00:12:08.625 "compare": false, 00:12:08.625 "compare_and_write": false, 00:12:08.625 "abort": true, 00:12:08.625 "seek_hole": false, 00:12:08.625 "seek_data": false, 00:12:08.625 "copy": true, 00:12:08.625 "nvme_iov_md": false 00:12:08.625 }, 00:12:08.625 "memory_domains": [ 00:12:08.625 { 00:12:08.625 "dma_device_id": "system", 00:12:08.625 "dma_device_type": 1 00:12:08.625 }, 00:12:08.625 { 00:12:08.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.625 "dma_device_type": 2 00:12:08.625 } 00:12:08.625 ], 00:12:08.625 "driver_specific": {} 00:12:08.625 } 00:12:08.625 ] 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.625 14:11:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.625 "name": "Existed_Raid", 00:12:08.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.625 "strip_size_kb": 64, 00:12:08.625 "state": "configuring", 00:12:08.625 "raid_level": "raid0", 00:12:08.625 "superblock": false, 00:12:08.625 "num_base_bdevs": 3, 00:12:08.625 "num_base_bdevs_discovered": 1, 00:12:08.625 "num_base_bdevs_operational": 3, 00:12:08.625 "base_bdevs_list": [ 00:12:08.625 { 00:12:08.625 "name": "BaseBdev1", 
00:12:08.625 "uuid": "ba73cc3c-02d2-4e43-a86a-c0be1a395f33", 00:12:08.625 "is_configured": true, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 65536 00:12:08.625 }, 00:12:08.625 { 00:12:08.625 "name": "BaseBdev2", 00:12:08.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.625 "is_configured": false, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 0 00:12:08.625 }, 00:12:08.625 { 00:12:08.625 "name": "BaseBdev3", 00:12:08.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.625 "is_configured": false, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 0 00:12:08.625 } 00:12:08.625 ] 00:12:08.625 }' 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.625 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.195 [2024-11-27 14:11:39.520095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.195 [2024-11-27 14:11:39.520157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.195 [2024-11-27 
14:11:39.528140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.195 [2024-11-27 14:11:39.530693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.195 [2024-11-27 14:11:39.530750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.195 [2024-11-27 14:11:39.530768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.195 [2024-11-27 14:11:39.530785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.195 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.195 "name": "Existed_Raid", 00:12:09.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.195 "strip_size_kb": 64, 00:12:09.195 "state": "configuring", 00:12:09.195 "raid_level": "raid0", 00:12:09.195 "superblock": false, 00:12:09.195 "num_base_bdevs": 3, 00:12:09.195 "num_base_bdevs_discovered": 1, 00:12:09.195 "num_base_bdevs_operational": 3, 00:12:09.195 "base_bdevs_list": [ 00:12:09.195 { 00:12:09.195 "name": "BaseBdev1", 00:12:09.195 "uuid": "ba73cc3c-02d2-4e43-a86a-c0be1a395f33", 00:12:09.195 "is_configured": true, 00:12:09.195 "data_offset": 0, 00:12:09.195 "data_size": 65536 00:12:09.195 }, 00:12:09.195 { 00:12:09.195 "name": "BaseBdev2", 00:12:09.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.195 "is_configured": false, 00:12:09.195 "data_offset": 0, 00:12:09.195 "data_size": 0 00:12:09.195 }, 00:12:09.195 { 00:12:09.195 "name": "BaseBdev3", 00:12:09.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.195 "is_configured": false, 00:12:09.195 "data_offset": 0, 00:12:09.195 "data_size": 0 00:12:09.195 } 00:12:09.195 ] 00:12:09.195 }' 00:12:09.196 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:09.196 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 [2024-11-27 14:11:40.095215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.761 BaseBdev2 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.761 14:11:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.761 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 [ 00:12:09.761 { 00:12:09.761 "name": "BaseBdev2", 00:12:09.761 "aliases": [ 00:12:09.762 "0c6aab7c-2827-44a5-8cba-bf5c4624df0b" 00:12:09.762 ], 00:12:09.762 "product_name": "Malloc disk", 00:12:09.762 "block_size": 512, 00:12:09.762 "num_blocks": 65536, 00:12:09.762 "uuid": "0c6aab7c-2827-44a5-8cba-bf5c4624df0b", 00:12:09.762 "assigned_rate_limits": { 00:12:09.762 "rw_ios_per_sec": 0, 00:12:09.762 "rw_mbytes_per_sec": 0, 00:12:09.762 "r_mbytes_per_sec": 0, 00:12:09.762 "w_mbytes_per_sec": 0 00:12:09.762 }, 00:12:09.762 "claimed": true, 00:12:09.762 "claim_type": "exclusive_write", 00:12:09.762 "zoned": false, 00:12:09.762 "supported_io_types": { 00:12:09.762 "read": true, 00:12:09.762 "write": true, 00:12:09.762 "unmap": true, 00:12:09.762 "flush": true, 00:12:09.762 "reset": true, 00:12:09.762 "nvme_admin": false, 00:12:09.762 "nvme_io": false, 00:12:09.762 "nvme_io_md": false, 00:12:09.762 "write_zeroes": true, 00:12:09.762 "zcopy": true, 00:12:09.762 "get_zone_info": false, 00:12:09.762 "zone_management": false, 00:12:09.762 "zone_append": false, 00:12:09.762 "compare": false, 00:12:09.762 "compare_and_write": false, 00:12:09.762 "abort": true, 00:12:09.762 "seek_hole": false, 00:12:09.762 "seek_data": false, 00:12:09.762 "copy": true, 00:12:09.762 "nvme_iov_md": false 00:12:09.762 }, 00:12:09.762 "memory_domains": [ 00:12:09.762 { 00:12:09.762 "dma_device_id": "system", 00:12:09.762 "dma_device_type": 1 00:12:09.762 }, 00:12:09.762 { 00:12:09.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.762 "dma_device_type": 2 00:12:09.762 } 00:12:09.762 ], 00:12:09.762 "driver_specific": {} 00:12:09.762 } 00:12:09.762 ] 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.762 14:11:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.762 "name": "Existed_Raid", 00:12:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.762 "strip_size_kb": 64, 00:12:09.762 "state": "configuring", 00:12:09.762 "raid_level": "raid0", 00:12:09.762 "superblock": false, 00:12:09.762 "num_base_bdevs": 3, 00:12:09.762 "num_base_bdevs_discovered": 2, 00:12:09.762 "num_base_bdevs_operational": 3, 00:12:09.762 "base_bdevs_list": [ 00:12:09.762 { 00:12:09.762 "name": "BaseBdev1", 00:12:09.762 "uuid": "ba73cc3c-02d2-4e43-a86a-c0be1a395f33", 00:12:09.762 "is_configured": true, 00:12:09.762 "data_offset": 0, 00:12:09.762 "data_size": 65536 00:12:09.762 }, 00:12:09.762 { 00:12:09.762 "name": "BaseBdev2", 00:12:09.762 "uuid": "0c6aab7c-2827-44a5-8cba-bf5c4624df0b", 00:12:09.762 "is_configured": true, 00:12:09.762 "data_offset": 0, 00:12:09.762 "data_size": 65536 00:12:09.762 }, 00:12:09.762 { 00:12:09.762 "name": "BaseBdev3", 00:12:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.762 "is_configured": false, 00:12:09.762 "data_offset": 0, 00:12:09.762 "data_size": 0 00:12:09.762 } 00:12:09.762 ] 00:12:09.762 }' 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.762 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.349 [2024-11-27 14:11:40.688451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.349 [2024-11-27 14:11:40.688505] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:10.349 [2024-11-27 14:11:40.688526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:10.349 [2024-11-27 14:11:40.688885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.349 [2024-11-27 14:11:40.689119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:10.349 [2024-11-27 14:11:40.689137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:10.349 [2024-11-27 14:11:40.689455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.349 BaseBdev3 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.349 
14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.349 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.349 [ 00:12:10.349 { 00:12:10.349 "name": "BaseBdev3", 00:12:10.349 "aliases": [ 00:12:10.349 "b8cde64a-3e95-4d19-ba48-8bef1fee7035" 00:12:10.349 ], 00:12:10.349 "product_name": "Malloc disk", 00:12:10.349 "block_size": 512, 00:12:10.349 "num_blocks": 65536, 00:12:10.349 "uuid": "b8cde64a-3e95-4d19-ba48-8bef1fee7035", 00:12:10.349 "assigned_rate_limits": { 00:12:10.349 "rw_ios_per_sec": 0, 00:12:10.349 "rw_mbytes_per_sec": 0, 00:12:10.349 "r_mbytes_per_sec": 0, 00:12:10.349 "w_mbytes_per_sec": 0 00:12:10.350 }, 00:12:10.350 "claimed": true, 00:12:10.350 "claim_type": "exclusive_write", 00:12:10.350 "zoned": false, 00:12:10.350 "supported_io_types": { 00:12:10.350 "read": true, 00:12:10.350 "write": true, 00:12:10.350 "unmap": true, 00:12:10.350 "flush": true, 00:12:10.350 "reset": true, 00:12:10.350 "nvme_admin": false, 00:12:10.350 "nvme_io": false, 00:12:10.350 "nvme_io_md": false, 00:12:10.350 "write_zeroes": true, 00:12:10.350 "zcopy": true, 00:12:10.350 "get_zone_info": false, 00:12:10.350 "zone_management": false, 00:12:10.350 "zone_append": false, 00:12:10.350 "compare": false, 00:12:10.350 "compare_and_write": false, 00:12:10.350 "abort": true, 00:12:10.350 "seek_hole": false, 00:12:10.350 "seek_data": false, 00:12:10.350 "copy": true, 00:12:10.350 "nvme_iov_md": false 00:12:10.350 }, 00:12:10.350 "memory_domains": [ 00:12:10.350 { 00:12:10.350 "dma_device_id": "system", 00:12:10.350 "dma_device_type": 1 00:12:10.350 }, 00:12:10.350 { 00:12:10.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.350 "dma_device_type": 2 00:12:10.350 } 00:12:10.350 ], 00:12:10.350 "driver_specific": {} 00:12:10.350 } 00:12:10.350 ] 
00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.350 "name": "Existed_Raid", 00:12:10.350 "uuid": "cb23e676-cb92-4cef-a19e-8846f07bc11f", 00:12:10.350 "strip_size_kb": 64, 00:12:10.350 "state": "online", 00:12:10.350 "raid_level": "raid0", 00:12:10.350 "superblock": false, 00:12:10.350 "num_base_bdevs": 3, 00:12:10.350 "num_base_bdevs_discovered": 3, 00:12:10.350 "num_base_bdevs_operational": 3, 00:12:10.350 "base_bdevs_list": [ 00:12:10.350 { 00:12:10.350 "name": "BaseBdev1", 00:12:10.350 "uuid": "ba73cc3c-02d2-4e43-a86a-c0be1a395f33", 00:12:10.350 "is_configured": true, 00:12:10.350 "data_offset": 0, 00:12:10.350 "data_size": 65536 00:12:10.350 }, 00:12:10.350 { 00:12:10.350 "name": "BaseBdev2", 00:12:10.350 "uuid": "0c6aab7c-2827-44a5-8cba-bf5c4624df0b", 00:12:10.350 "is_configured": true, 00:12:10.350 "data_offset": 0, 00:12:10.350 "data_size": 65536 00:12:10.350 }, 00:12:10.350 { 00:12:10.350 "name": "BaseBdev3", 00:12:10.350 "uuid": "b8cde64a-3e95-4d19-ba48-8bef1fee7035", 00:12:10.350 "is_configured": true, 00:12:10.350 "data_offset": 0, 00:12:10.350 "data_size": 65536 00:12:10.350 } 00:12:10.350 ] 00:12:10.350 }' 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.350 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 [2024-11-27 14:11:41.253073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.916 "name": "Existed_Raid", 00:12:10.916 "aliases": [ 00:12:10.916 "cb23e676-cb92-4cef-a19e-8846f07bc11f" 00:12:10.916 ], 00:12:10.916 "product_name": "Raid Volume", 00:12:10.916 "block_size": 512, 00:12:10.916 "num_blocks": 196608, 00:12:10.916 "uuid": "cb23e676-cb92-4cef-a19e-8846f07bc11f", 00:12:10.916 "assigned_rate_limits": { 00:12:10.916 "rw_ios_per_sec": 0, 00:12:10.916 "rw_mbytes_per_sec": 0, 00:12:10.916 "r_mbytes_per_sec": 0, 00:12:10.916 "w_mbytes_per_sec": 0 00:12:10.916 }, 00:12:10.916 "claimed": false, 00:12:10.916 "zoned": false, 00:12:10.916 "supported_io_types": { 00:12:10.916 "read": true, 00:12:10.916 "write": true, 00:12:10.916 "unmap": true, 00:12:10.916 "flush": true, 00:12:10.916 "reset": true, 00:12:10.916 "nvme_admin": false, 00:12:10.916 "nvme_io": false, 00:12:10.916 "nvme_io_md": false, 00:12:10.916 "write_zeroes": true, 00:12:10.916 "zcopy": false, 00:12:10.916 "get_zone_info": false, 00:12:10.916 "zone_management": false, 00:12:10.916 
"zone_append": false, 00:12:10.916 "compare": false, 00:12:10.916 "compare_and_write": false, 00:12:10.916 "abort": false, 00:12:10.916 "seek_hole": false, 00:12:10.916 "seek_data": false, 00:12:10.916 "copy": false, 00:12:10.916 "nvme_iov_md": false 00:12:10.916 }, 00:12:10.916 "memory_domains": [ 00:12:10.916 { 00:12:10.916 "dma_device_id": "system", 00:12:10.916 "dma_device_type": 1 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.916 "dma_device_type": 2 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "dma_device_id": "system", 00:12:10.916 "dma_device_type": 1 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.916 "dma_device_type": 2 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "dma_device_id": "system", 00:12:10.916 "dma_device_type": 1 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.916 "dma_device_type": 2 00:12:10.916 } 00:12:10.916 ], 00:12:10.916 "driver_specific": { 00:12:10.916 "raid": { 00:12:10.916 "uuid": "cb23e676-cb92-4cef-a19e-8846f07bc11f", 00:12:10.916 "strip_size_kb": 64, 00:12:10.916 "state": "online", 00:12:10.916 "raid_level": "raid0", 00:12:10.916 "superblock": false, 00:12:10.916 "num_base_bdevs": 3, 00:12:10.916 "num_base_bdevs_discovered": 3, 00:12:10.916 "num_base_bdevs_operational": 3, 00:12:10.916 "base_bdevs_list": [ 00:12:10.916 { 00:12:10.916 "name": "BaseBdev1", 00:12:10.916 "uuid": "ba73cc3c-02d2-4e43-a86a-c0be1a395f33", 00:12:10.916 "is_configured": true, 00:12:10.916 "data_offset": 0, 00:12:10.916 "data_size": 65536 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "name": "BaseBdev2", 00:12:10.916 "uuid": "0c6aab7c-2827-44a5-8cba-bf5c4624df0b", 00:12:10.916 "is_configured": true, 00:12:10.916 "data_offset": 0, 00:12:10.916 "data_size": 65536 00:12:10.916 }, 00:12:10.916 { 00:12:10.916 "name": "BaseBdev3", 00:12:10.916 "uuid": "b8cde64a-3e95-4d19-ba48-8bef1fee7035", 00:12:10.916 "is_configured": true, 
00:12:10.916 "data_offset": 0, 00:12:10.916 "data_size": 65536 00:12:10.916 } 00:12:10.916 ] 00:12:10.916 } 00:12:10.916 } 00:12:10.916 }' 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:10.916 BaseBdev2 00:12:10.916 BaseBdev3' 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:10.916 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.917 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.917 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.173 [2024-11-27 14:11:41.596781] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.173 [2024-11-27 14:11:41.596950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.173 [2024-11-27 14:11:41.597163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:11.173 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:11.174 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.174 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.430 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.431 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.431 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.431 "name": "Existed_Raid", 00:12:11.431 "uuid": "cb23e676-cb92-4cef-a19e-8846f07bc11f", 00:12:11.431 "strip_size_kb": 64, 00:12:11.431 "state": "offline", 00:12:11.431 "raid_level": "raid0", 00:12:11.431 "superblock": false, 00:12:11.431 "num_base_bdevs": 3, 00:12:11.431 "num_base_bdevs_discovered": 2, 00:12:11.431 "num_base_bdevs_operational": 2, 00:12:11.431 "base_bdevs_list": [ 00:12:11.431 { 00:12:11.431 "name": null, 00:12:11.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.431 "is_configured": false, 00:12:11.431 "data_offset": 0, 00:12:11.431 "data_size": 65536 00:12:11.431 }, 00:12:11.431 { 00:12:11.431 "name": "BaseBdev2", 00:12:11.431 "uuid": "0c6aab7c-2827-44a5-8cba-bf5c4624df0b", 00:12:11.431 "is_configured": true, 00:12:11.431 "data_offset": 0, 00:12:11.431 "data_size": 65536 00:12:11.431 }, 00:12:11.431 { 00:12:11.431 "name": "BaseBdev3", 00:12:11.431 "uuid": "b8cde64a-3e95-4d19-ba48-8bef1fee7035", 00:12:11.431 "is_configured": true, 00:12:11.431 "data_offset": 0, 00:12:11.431 "data_size": 65536 00:12:11.431 } 00:12:11.431 ] 00:12:11.431 }' 00:12:11.431 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.431 14:11:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 [2024-11-27 14:11:42.257379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.997 14:11:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 [2024-11-27 14:11:42.401276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.997 [2024-11-27 14:11:42.401339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:12:11.997 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.255 BaseBdev2 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.255 [ 00:12:12.255 { 00:12:12.255 "name": "BaseBdev2", 00:12:12.255 "aliases": [ 00:12:12.255 "c84cf453-4b46-492d-911d-610548e3d907" 00:12:12.255 ], 00:12:12.255 "product_name": "Malloc disk", 00:12:12.255 "block_size": 512, 00:12:12.255 "num_blocks": 65536, 00:12:12.255 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:12.255 "assigned_rate_limits": { 00:12:12.255 "rw_ios_per_sec": 0, 00:12:12.255 "rw_mbytes_per_sec": 0, 00:12:12.255 "r_mbytes_per_sec": 0, 00:12:12.255 "w_mbytes_per_sec": 0 00:12:12.255 }, 00:12:12.255 "claimed": false, 00:12:12.255 "zoned": false, 00:12:12.255 "supported_io_types": { 00:12:12.255 "read": true, 00:12:12.255 "write": true, 00:12:12.255 "unmap": true, 00:12:12.255 "flush": true, 00:12:12.255 "reset": true, 00:12:12.255 "nvme_admin": false, 00:12:12.255 "nvme_io": false, 00:12:12.255 "nvme_io_md": false, 00:12:12.255 "write_zeroes": true, 00:12:12.255 "zcopy": true, 00:12:12.255 "get_zone_info": false, 00:12:12.255 "zone_management": false, 00:12:12.255 "zone_append": false, 00:12:12.255 "compare": false, 00:12:12.255 "compare_and_write": false, 00:12:12.255 "abort": true, 00:12:12.255 "seek_hole": false, 00:12:12.255 "seek_data": false, 00:12:12.255 "copy": true, 00:12:12.255 "nvme_iov_md": false 00:12:12.255 }, 00:12:12.255 "memory_domains": [ 00:12:12.255 { 00:12:12.255 "dma_device_id": "system", 00:12:12.255 "dma_device_type": 1 00:12:12.255 }, 
00:12:12.255 { 00:12:12.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.255 "dma_device_type": 2 00:12:12.255 } 00:12:12.255 ], 00:12:12.255 "driver_specific": {} 00:12:12.255 } 00:12:12.255 ] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.255 BaseBdev3 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.255 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.256 [ 00:12:12.256 { 00:12:12.256 "name": "BaseBdev3", 00:12:12.256 "aliases": [ 00:12:12.256 "865440cd-6406-46c2-b736-8b54c8df4efe" 00:12:12.256 ], 00:12:12.256 "product_name": "Malloc disk", 00:12:12.256 "block_size": 512, 00:12:12.256 "num_blocks": 65536, 00:12:12.256 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:12.256 "assigned_rate_limits": { 00:12:12.256 "rw_ios_per_sec": 0, 00:12:12.256 "rw_mbytes_per_sec": 0, 00:12:12.256 "r_mbytes_per_sec": 0, 00:12:12.256 "w_mbytes_per_sec": 0 00:12:12.256 }, 00:12:12.256 "claimed": false, 00:12:12.256 "zoned": false, 00:12:12.256 "supported_io_types": { 00:12:12.256 "read": true, 00:12:12.256 "write": true, 00:12:12.256 "unmap": true, 00:12:12.256 "flush": true, 00:12:12.256 "reset": true, 00:12:12.256 "nvme_admin": false, 00:12:12.256 "nvme_io": false, 00:12:12.256 "nvme_io_md": false, 00:12:12.256 "write_zeroes": true, 00:12:12.256 "zcopy": true, 00:12:12.256 "get_zone_info": false, 00:12:12.256 "zone_management": false, 00:12:12.256 "zone_append": false, 00:12:12.256 "compare": false, 00:12:12.256 "compare_and_write": false, 00:12:12.256 "abort": true, 00:12:12.256 "seek_hole": false, 00:12:12.256 "seek_data": false, 00:12:12.256 "copy": true, 00:12:12.256 "nvme_iov_md": false 00:12:12.256 }, 00:12:12.256 "memory_domains": [ 00:12:12.256 { 00:12:12.256 "dma_device_id": "system", 00:12:12.256 "dma_device_type": 1 00:12:12.256 }, 00:12:12.256 { 
00:12:12.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.256 "dma_device_type": 2 00:12:12.256 } 00:12:12.256 ], 00:12:12.256 "driver_specific": {} 00:12:12.256 } 00:12:12.256 ] 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.256 [2024-11-27 14:11:42.698535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.256 [2024-11-27 14:11:42.698590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.256 [2024-11-27 14:11:42.698622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.256 [2024-11-27 14:11:42.701002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.256 "name": "Existed_Raid", 00:12:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.256 "strip_size_kb": 64, 00:12:12.256 "state": "configuring", 00:12:12.256 "raid_level": "raid0", 00:12:12.256 "superblock": false, 00:12:12.256 "num_base_bdevs": 3, 00:12:12.256 "num_base_bdevs_discovered": 2, 00:12:12.256 "num_base_bdevs_operational": 3, 00:12:12.256 "base_bdevs_list": [ 00:12:12.256 { 00:12:12.256 "name": "BaseBdev1", 00:12:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.256 
"is_configured": false, 00:12:12.256 "data_offset": 0, 00:12:12.256 "data_size": 0 00:12:12.256 }, 00:12:12.256 { 00:12:12.256 "name": "BaseBdev2", 00:12:12.256 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:12.256 "is_configured": true, 00:12:12.256 "data_offset": 0, 00:12:12.256 "data_size": 65536 00:12:12.256 }, 00:12:12.256 { 00:12:12.256 "name": "BaseBdev3", 00:12:12.256 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:12.256 "is_configured": true, 00:12:12.256 "data_offset": 0, 00:12:12.256 "data_size": 65536 00:12:12.256 } 00:12:12.256 ] 00:12:12.256 }' 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.256 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.821 [2024-11-27 14:11:43.210712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.821 14:11:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.821 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.821 "name": "Existed_Raid", 00:12:12.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.821 "strip_size_kb": 64, 00:12:12.821 "state": "configuring", 00:12:12.821 "raid_level": "raid0", 00:12:12.821 "superblock": false, 00:12:12.821 "num_base_bdevs": 3, 00:12:12.821 "num_base_bdevs_discovered": 1, 00:12:12.821 "num_base_bdevs_operational": 3, 00:12:12.821 "base_bdevs_list": [ 00:12:12.821 { 00:12:12.821 "name": "BaseBdev1", 00:12:12.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.822 "is_configured": false, 00:12:12.822 "data_offset": 0, 00:12:12.822 "data_size": 0 00:12:12.822 }, 00:12:12.822 { 00:12:12.822 "name": null, 00:12:12.822 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:12.822 "is_configured": false, 00:12:12.822 "data_offset": 0, 
00:12:12.822 "data_size": 65536 00:12:12.822 }, 00:12:12.822 { 00:12:12.822 "name": "BaseBdev3", 00:12:12.822 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:12.822 "is_configured": true, 00:12:12.822 "data_offset": 0, 00:12:12.822 "data_size": 65536 00:12:12.822 } 00:12:12.822 ] 00:12:12.822 }' 00:12:12.822 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.822 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.446 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 [2024-11-27 14:11:43.860858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.447 BaseBdev1 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 [ 00:12:13.447 { 00:12:13.447 "name": "BaseBdev1", 00:12:13.447 "aliases": [ 00:12:13.447 "eb88ba32-c7f2-4d4f-8f45-700713f692b9" 00:12:13.447 ], 00:12:13.447 "product_name": "Malloc disk", 00:12:13.447 "block_size": 512, 00:12:13.447 "num_blocks": 65536, 00:12:13.447 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:13.447 "assigned_rate_limits": { 00:12:13.447 "rw_ios_per_sec": 0, 00:12:13.447 "rw_mbytes_per_sec": 0, 00:12:13.447 "r_mbytes_per_sec": 0, 00:12:13.447 "w_mbytes_per_sec": 0 00:12:13.447 }, 00:12:13.447 "claimed": true, 00:12:13.447 "claim_type": "exclusive_write", 00:12:13.447 "zoned": false, 00:12:13.447 "supported_io_types": { 00:12:13.447 "read": true, 00:12:13.447 "write": true, 00:12:13.447 "unmap": 
true, 00:12:13.447 "flush": true, 00:12:13.447 "reset": true, 00:12:13.447 "nvme_admin": false, 00:12:13.447 "nvme_io": false, 00:12:13.447 "nvme_io_md": false, 00:12:13.447 "write_zeroes": true, 00:12:13.447 "zcopy": true, 00:12:13.447 "get_zone_info": false, 00:12:13.447 "zone_management": false, 00:12:13.447 "zone_append": false, 00:12:13.447 "compare": false, 00:12:13.447 "compare_and_write": false, 00:12:13.447 "abort": true, 00:12:13.447 "seek_hole": false, 00:12:13.447 "seek_data": false, 00:12:13.447 "copy": true, 00:12:13.447 "nvme_iov_md": false 00:12:13.447 }, 00:12:13.447 "memory_domains": [ 00:12:13.447 { 00:12:13.447 "dma_device_id": "system", 00:12:13.447 "dma_device_type": 1 00:12:13.447 }, 00:12:13.447 { 00:12:13.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.447 "dma_device_type": 2 00:12:13.447 } 00:12:13.447 ], 00:12:13.447 "driver_specific": {} 00:12:13.447 } 00:12:13.447 ] 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.447 14:11:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.447 "name": "Existed_Raid", 00:12:13.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.447 "strip_size_kb": 64, 00:12:13.447 "state": "configuring", 00:12:13.447 "raid_level": "raid0", 00:12:13.447 "superblock": false, 00:12:13.447 "num_base_bdevs": 3, 00:12:13.447 "num_base_bdevs_discovered": 2, 00:12:13.447 "num_base_bdevs_operational": 3, 00:12:13.447 "base_bdevs_list": [ 00:12:13.447 { 00:12:13.447 "name": "BaseBdev1", 00:12:13.447 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:13.447 "is_configured": true, 00:12:13.447 "data_offset": 0, 00:12:13.447 "data_size": 65536 00:12:13.447 }, 00:12:13.447 { 00:12:13.447 "name": null, 00:12:13.447 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:13.447 "is_configured": false, 00:12:13.447 "data_offset": 0, 00:12:13.447 "data_size": 65536 00:12:13.447 }, 00:12:13.447 { 00:12:13.447 "name": "BaseBdev3", 00:12:13.447 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:13.447 "is_configured": true, 00:12:13.447 "data_offset": 0, 
00:12:13.447 "data_size": 65536 00:12:13.447 } 00:12:13.447 ] 00:12:13.447 }' 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.447 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.014 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.014 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.014 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.014 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.014 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.014 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.015 [2024-11-27 14:11:44.461031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.015 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.272 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.272 "name": "Existed_Raid", 00:12:14.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.272 "strip_size_kb": 64, 00:12:14.272 "state": "configuring", 00:12:14.272 "raid_level": "raid0", 00:12:14.272 "superblock": false, 00:12:14.272 "num_base_bdevs": 3, 00:12:14.272 "num_base_bdevs_discovered": 1, 00:12:14.272 "num_base_bdevs_operational": 3, 00:12:14.272 "base_bdevs_list": [ 00:12:14.272 { 00:12:14.272 "name": "BaseBdev1", 00:12:14.272 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:14.272 "is_configured": true, 00:12:14.272 "data_offset": 0, 00:12:14.272 "data_size": 65536 00:12:14.272 }, 00:12:14.272 { 
00:12:14.272 "name": null, 00:12:14.272 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:14.272 "is_configured": false, 00:12:14.272 "data_offset": 0, 00:12:14.272 "data_size": 65536 00:12:14.272 }, 00:12:14.272 { 00:12:14.272 "name": null, 00:12:14.272 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:14.272 "is_configured": false, 00:12:14.272 "data_offset": 0, 00:12:14.272 "data_size": 65536 00:12:14.272 } 00:12:14.272 ] 00:12:14.272 }' 00:12:14.272 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.272 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.531 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.531 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.531 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.531 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.531 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.789 [2024-11-27 14:11:45.049254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.789 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.789 "name": "Existed_Raid", 00:12:14.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.789 "strip_size_kb": 64, 00:12:14.789 "state": "configuring", 00:12:14.789 "raid_level": "raid0", 00:12:14.789 
"superblock": false, 00:12:14.789 "num_base_bdevs": 3, 00:12:14.789 "num_base_bdevs_discovered": 2, 00:12:14.789 "num_base_bdevs_operational": 3, 00:12:14.789 "base_bdevs_list": [ 00:12:14.789 { 00:12:14.789 "name": "BaseBdev1", 00:12:14.789 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:14.789 "is_configured": true, 00:12:14.789 "data_offset": 0, 00:12:14.789 "data_size": 65536 00:12:14.789 }, 00:12:14.789 { 00:12:14.789 "name": null, 00:12:14.789 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:14.789 "is_configured": false, 00:12:14.789 "data_offset": 0, 00:12:14.789 "data_size": 65536 00:12:14.789 }, 00:12:14.789 { 00:12:14.790 "name": "BaseBdev3", 00:12:14.790 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:14.790 "is_configured": true, 00:12:14.790 "data_offset": 0, 00:12:14.790 "data_size": 65536 00:12:14.790 } 00:12:14.790 ] 00:12:14.790 }' 00:12:14.790 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.790 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.049 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:15.049 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.049 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 [2024-11-27 14:11:45.613413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.308 "name": "Existed_Raid", 00:12:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.308 "strip_size_kb": 64, 00:12:15.308 "state": "configuring", 00:12:15.308 "raid_level": "raid0", 00:12:15.308 "superblock": false, 00:12:15.308 "num_base_bdevs": 3, 00:12:15.308 "num_base_bdevs_discovered": 1, 00:12:15.308 "num_base_bdevs_operational": 3, 00:12:15.308 "base_bdevs_list": [ 00:12:15.308 { 00:12:15.308 "name": null, 00:12:15.308 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:15.308 "is_configured": false, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 65536 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": null, 00:12:15.308 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:15.308 "is_configured": false, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 65536 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": "BaseBdev3", 00:12:15.308 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:15.308 "is_configured": true, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 65536 00:12:15.308 } 00:12:15.308 ] 00:12:15.308 }' 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.308 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 [2024-11-27 14:11:46.266382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:15.874 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.875 "name": "Existed_Raid", 00:12:15.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.875 "strip_size_kb": 64, 00:12:15.875 "state": "configuring", 00:12:15.875 "raid_level": "raid0", 00:12:15.875 "superblock": false, 00:12:15.875 "num_base_bdevs": 3, 00:12:15.875 "num_base_bdevs_discovered": 2, 00:12:15.875 "num_base_bdevs_operational": 3, 00:12:15.875 "base_bdevs_list": [ 00:12:15.875 { 00:12:15.875 "name": null, 00:12:15.875 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:15.875 "is_configured": false, 00:12:15.875 "data_offset": 0, 00:12:15.875 "data_size": 65536 00:12:15.875 }, 00:12:15.875 { 00:12:15.875 "name": "BaseBdev2", 00:12:15.875 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:15.875 "is_configured": true, 00:12:15.875 "data_offset": 0, 00:12:15.875 "data_size": 65536 00:12:15.875 }, 00:12:15.875 { 00:12:15.875 "name": "BaseBdev3", 00:12:15.875 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:15.875 "is_configured": true, 00:12:15.875 "data_offset": 0, 00:12:15.875 "data_size": 65536 00:12:15.875 } 00:12:15.875 ] 00:12:15.875 }' 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.875 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.439 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.440 14:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb88ba32-c7f2-4d4f-8f45-700713f692b9 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.440 [2024-11-27 14:11:46.929687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:16.440 [2024-11-27 14:11:46.929738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:16.440 [2024-11-27 14:11:46.929754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:16.440 [2024-11-27 14:11:46.930207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:12:16.440 [2024-11-27 14:11:46.930426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:16.440 [2024-11-27 14:11:46.930444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:16.440 [2024-11-27 14:11:46.930742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.440 NewBaseBdev 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.440 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:16.440 [ 00:12:16.440 { 00:12:16.440 "name": "NewBaseBdev", 00:12:16.440 "aliases": [ 00:12:16.440 "eb88ba32-c7f2-4d4f-8f45-700713f692b9" 00:12:16.699 ], 00:12:16.699 "product_name": "Malloc disk", 00:12:16.699 "block_size": 512, 00:12:16.699 "num_blocks": 65536, 00:12:16.699 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:16.699 "assigned_rate_limits": { 00:12:16.699 "rw_ios_per_sec": 0, 00:12:16.699 "rw_mbytes_per_sec": 0, 00:12:16.699 "r_mbytes_per_sec": 0, 00:12:16.699 "w_mbytes_per_sec": 0 00:12:16.699 }, 00:12:16.699 "claimed": true, 00:12:16.699 "claim_type": "exclusive_write", 00:12:16.699 "zoned": false, 00:12:16.699 "supported_io_types": { 00:12:16.699 "read": true, 00:12:16.699 "write": true, 00:12:16.699 "unmap": true, 00:12:16.699 "flush": true, 00:12:16.699 "reset": true, 00:12:16.699 "nvme_admin": false, 00:12:16.699 "nvme_io": false, 00:12:16.699 "nvme_io_md": false, 00:12:16.699 "write_zeroes": true, 00:12:16.699 "zcopy": true, 00:12:16.699 "get_zone_info": false, 00:12:16.699 "zone_management": false, 00:12:16.699 "zone_append": false, 00:12:16.699 "compare": false, 00:12:16.699 "compare_and_write": false, 00:12:16.699 "abort": true, 00:12:16.699 "seek_hole": false, 00:12:16.699 "seek_data": false, 00:12:16.699 "copy": true, 00:12:16.699 "nvme_iov_md": false 00:12:16.699 }, 00:12:16.699 "memory_domains": [ 00:12:16.699 { 00:12:16.699 "dma_device_id": "system", 00:12:16.699 "dma_device_type": 1 00:12:16.699 }, 00:12:16.699 { 00:12:16.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.699 "dma_device_type": 2 00:12:16.699 } 00:12:16.699 ], 00:12:16.699 "driver_specific": {} 00:12:16.699 } 00:12:16.699 ] 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.699 14:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.699 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.699 "name": "Existed_Raid", 00:12:16.699 "uuid": "189144eb-2300-4982-9cab-30b9d56e4573", 00:12:16.699 "strip_size_kb": 64, 00:12:16.699 "state": "online", 00:12:16.699 "raid_level": "raid0", 00:12:16.699 "superblock": false, 00:12:16.699 "num_base_bdevs": 3, 00:12:16.699 
"num_base_bdevs_discovered": 3, 00:12:16.699 "num_base_bdevs_operational": 3, 00:12:16.699 "base_bdevs_list": [ 00:12:16.699 { 00:12:16.699 "name": "NewBaseBdev", 00:12:16.699 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:16.699 "is_configured": true, 00:12:16.699 "data_offset": 0, 00:12:16.699 "data_size": 65536 00:12:16.699 }, 00:12:16.699 { 00:12:16.699 "name": "BaseBdev2", 00:12:16.699 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:16.699 "is_configured": true, 00:12:16.699 "data_offset": 0, 00:12:16.699 "data_size": 65536 00:12:16.699 }, 00:12:16.699 { 00:12:16.699 "name": "BaseBdev3", 00:12:16.699 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:16.699 "is_configured": true, 00:12:16.699 "data_offset": 0, 00:12:16.699 "data_size": 65536 00:12:16.699 } 00:12:16.699 ] 00:12:16.699 }' 00:12:16.699 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.699 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.958 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.958 [2024-11-27 14:11:47.454333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.217 "name": "Existed_Raid", 00:12:17.217 "aliases": [ 00:12:17.217 "189144eb-2300-4982-9cab-30b9d56e4573" 00:12:17.217 ], 00:12:17.217 "product_name": "Raid Volume", 00:12:17.217 "block_size": 512, 00:12:17.217 "num_blocks": 196608, 00:12:17.217 "uuid": "189144eb-2300-4982-9cab-30b9d56e4573", 00:12:17.217 "assigned_rate_limits": { 00:12:17.217 "rw_ios_per_sec": 0, 00:12:17.217 "rw_mbytes_per_sec": 0, 00:12:17.217 "r_mbytes_per_sec": 0, 00:12:17.217 "w_mbytes_per_sec": 0 00:12:17.217 }, 00:12:17.217 "claimed": false, 00:12:17.217 "zoned": false, 00:12:17.217 "supported_io_types": { 00:12:17.217 "read": true, 00:12:17.217 "write": true, 00:12:17.217 "unmap": true, 00:12:17.217 "flush": true, 00:12:17.217 "reset": true, 00:12:17.217 "nvme_admin": false, 00:12:17.217 "nvme_io": false, 00:12:17.217 "nvme_io_md": false, 00:12:17.217 "write_zeroes": true, 00:12:17.217 "zcopy": false, 00:12:17.217 "get_zone_info": false, 00:12:17.217 "zone_management": false, 00:12:17.217 "zone_append": false, 00:12:17.217 "compare": false, 00:12:17.217 "compare_and_write": false, 00:12:17.217 "abort": false, 00:12:17.217 "seek_hole": false, 00:12:17.217 "seek_data": false, 00:12:17.217 "copy": false, 00:12:17.217 "nvme_iov_md": false 00:12:17.217 }, 00:12:17.217 "memory_domains": [ 00:12:17.217 { 00:12:17.217 "dma_device_id": "system", 00:12:17.217 "dma_device_type": 1 00:12:17.217 }, 00:12:17.217 { 00:12:17.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.217 "dma_device_type": 2 00:12:17.217 }, 00:12:17.217 
{ 00:12:17.217 "dma_device_id": "system", 00:12:17.217 "dma_device_type": 1 00:12:17.217 }, 00:12:17.217 { 00:12:17.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.217 "dma_device_type": 2 00:12:17.217 }, 00:12:17.217 { 00:12:17.217 "dma_device_id": "system", 00:12:17.217 "dma_device_type": 1 00:12:17.217 }, 00:12:17.217 { 00:12:17.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.217 "dma_device_type": 2 00:12:17.217 } 00:12:17.217 ], 00:12:17.217 "driver_specific": { 00:12:17.217 "raid": { 00:12:17.217 "uuid": "189144eb-2300-4982-9cab-30b9d56e4573", 00:12:17.217 "strip_size_kb": 64, 00:12:17.217 "state": "online", 00:12:17.217 "raid_level": "raid0", 00:12:17.217 "superblock": false, 00:12:17.217 "num_base_bdevs": 3, 00:12:17.217 "num_base_bdevs_discovered": 3, 00:12:17.217 "num_base_bdevs_operational": 3, 00:12:17.217 "base_bdevs_list": [ 00:12:17.217 { 00:12:17.217 "name": "NewBaseBdev", 00:12:17.217 "uuid": "eb88ba32-c7f2-4d4f-8f45-700713f692b9", 00:12:17.217 "is_configured": true, 00:12:17.217 "data_offset": 0, 00:12:17.217 "data_size": 65536 00:12:17.217 }, 00:12:17.217 { 00:12:17.217 "name": "BaseBdev2", 00:12:17.217 "uuid": "c84cf453-4b46-492d-911d-610548e3d907", 00:12:17.217 "is_configured": true, 00:12:17.217 "data_offset": 0, 00:12:17.217 "data_size": 65536 00:12:17.217 }, 00:12:17.217 { 00:12:17.217 "name": "BaseBdev3", 00:12:17.217 "uuid": "865440cd-6406-46c2-b736-8b54c8df4efe", 00:12:17.217 "is_configured": true, 00:12:17.217 "data_offset": 0, 00:12:17.217 "data_size": 65536 00:12:17.217 } 00:12:17.217 ] 00:12:17.217 } 00:12:17.217 } 00:12:17.217 }' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:17.217 BaseBdev2 00:12:17.217 BaseBdev3' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.217 
14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.217 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.475 [2024-11-27 14:11:47.761999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.475 [2024-11-27 14:11:47.762036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.475 [2024-11-27 14:11:47.762132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.475 [2024-11-27 14:11:47.762206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.475 [2024-11-27 14:11:47.762227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63959 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63959 ']' 00:12:17.475 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63959 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63959 00:12:17.476 killing process with pid 63959 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63959' 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63959 00:12:17.476 [2024-11-27 14:11:47.803906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.476 14:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63959 00:12:17.734 [2024-11-27 14:11:48.090630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.682 ************************************ 00:12:18.682 END TEST raid_state_function_test 00:12:18.682 ************************************ 00:12:18.682 14:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:18.682 00:12:18.682 real 0m11.906s 00:12:18.682 user 0m19.693s 
00:12:18.682 sys 0m1.636s 00:12:18.682 14:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.682 14:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.940 14:11:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:18.940 14:11:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:18.940 14:11:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.940 14:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.940 ************************************ 00:12:18.940 START TEST raid_state_function_test_sb 00:12:18.940 ************************************ 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64602 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64602' 00:12:18.940 Process raid pid: 64602 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64602 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64602 ']' 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.940 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.940 [2024-11-27 14:11:49.321033] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:12:18.940 [2024-11-27 14:11:49.321210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.198 [2024-11-27 14:11:49.513220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.198 [2024-11-27 14:11:49.694180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.456 [2024-11-27 14:11:49.902242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.456 [2024-11-27 14:11:49.902300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.023 [2024-11-27 14:11:50.331909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.023 [2024-11-27 14:11:50.331977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.023 [2024-11-27 14:11:50.331996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.023 [2024-11-27 14:11:50.332014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.023 [2024-11-27 14:11:50.332024] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:12:20.023 [2024-11-27 14:11:50.332038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.023 "name": "Existed_Raid", 00:12:20.023 "uuid": "b5194b9b-ca5f-4618-aac1-a17f37a18d24", 00:12:20.023 "strip_size_kb": 64, 00:12:20.023 "state": "configuring", 00:12:20.023 "raid_level": "raid0", 00:12:20.023 "superblock": true, 00:12:20.023 "num_base_bdevs": 3, 00:12:20.023 "num_base_bdevs_discovered": 0, 00:12:20.023 "num_base_bdevs_operational": 3, 00:12:20.023 "base_bdevs_list": [ 00:12:20.023 { 00:12:20.023 "name": "BaseBdev1", 00:12:20.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.023 "is_configured": false, 00:12:20.023 "data_offset": 0, 00:12:20.023 "data_size": 0 00:12:20.023 }, 00:12:20.023 { 00:12:20.023 "name": "BaseBdev2", 00:12:20.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.023 "is_configured": false, 00:12:20.023 "data_offset": 0, 00:12:20.023 "data_size": 0 00:12:20.023 }, 00:12:20.023 { 00:12:20.023 "name": "BaseBdev3", 00:12:20.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.023 "is_configured": false, 00:12:20.023 "data_offset": 0, 00:12:20.023 "data_size": 0 00:12:20.023 } 00:12:20.023 ] 00:12:20.023 }' 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.023 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.591 [2024-11-27 14:11:50.807987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.591 [2024-11-27 14:11:50.808038] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.591 [2024-11-27 14:11:50.815968] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.591 [2024-11-27 14:11:50.816023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.591 [2024-11-27 14:11:50.816039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.591 [2024-11-27 14:11:50.816055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.591 [2024-11-27 14:11:50.816064] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.591 [2024-11-27 14:11:50.816078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.591 [2024-11-27 14:11:50.860914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.591 BaseBdev1 
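The test driver above repeatedly pairs `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` to pull a single raid bdev's state out of the RPC response. As a minimal sketch, the same selection can be done in Python over a trimmed sample of the JSON captured in this log (the field values come from the log; the helper name `select_raid_bdev` is ours, not an SPDK API):

```python
import json

# Trimmed sample of the bdev_raid_get_bdevs output captured in this log.
RPC_OUTPUT = json.dumps([
    {
        "name": "Existed_Raid",
        "strip_size_kb": 64,
        "state": "configuring",
        "raid_level": "raid0",
        "superblock": True,
        "num_base_bdevs": 3,
        "num_base_bdevs_discovered": 0,
        "num_base_bdevs_operational": 3,
    }
])

def select_raid_bdev(rpc_json, name):
    """Python equivalent of: jq -r '.[] | select(.name == NAME)'."""
    return next((b for b in json.loads(rpc_json) if b["name"] == name), None)

info = select_raid_bdev(RPC_OUTPUT, "Existed_Raid")
print(info["state"], info["raid_level"], info["num_base_bdevs"])
# → configuring raid0 3
```

The shell version stores the jq result in `raid_bdev_info` and re-filters it later; the Python dict plays the same role here.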
00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:20.591 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 [ 00:12:20.592 { 00:12:20.592 "name": "BaseBdev1", 00:12:20.592 "aliases": [ 00:12:20.592 "1f56910c-c417-4b19-9213-954e0b616124" 00:12:20.592 ], 00:12:20.592 "product_name": "Malloc disk", 00:12:20.592 "block_size": 512, 00:12:20.592 "num_blocks": 65536, 00:12:20.592 "uuid": "1f56910c-c417-4b19-9213-954e0b616124", 00:12:20.592 "assigned_rate_limits": { 00:12:20.592 
"rw_ios_per_sec": 0, 00:12:20.592 "rw_mbytes_per_sec": 0, 00:12:20.592 "r_mbytes_per_sec": 0, 00:12:20.592 "w_mbytes_per_sec": 0 00:12:20.592 }, 00:12:20.592 "claimed": true, 00:12:20.592 "claim_type": "exclusive_write", 00:12:20.592 "zoned": false, 00:12:20.592 "supported_io_types": { 00:12:20.592 "read": true, 00:12:20.592 "write": true, 00:12:20.592 "unmap": true, 00:12:20.592 "flush": true, 00:12:20.592 "reset": true, 00:12:20.592 "nvme_admin": false, 00:12:20.592 "nvme_io": false, 00:12:20.592 "nvme_io_md": false, 00:12:20.592 "write_zeroes": true, 00:12:20.592 "zcopy": true, 00:12:20.592 "get_zone_info": false, 00:12:20.592 "zone_management": false, 00:12:20.592 "zone_append": false, 00:12:20.592 "compare": false, 00:12:20.592 "compare_and_write": false, 00:12:20.592 "abort": true, 00:12:20.592 "seek_hole": false, 00:12:20.592 "seek_data": false, 00:12:20.592 "copy": true, 00:12:20.592 "nvme_iov_md": false 00:12:20.592 }, 00:12:20.592 "memory_domains": [ 00:12:20.592 { 00:12:20.592 "dma_device_id": "system", 00:12:20.592 "dma_device_type": 1 00:12:20.592 }, 00:12:20.592 { 00:12:20.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.592 "dma_device_type": 2 00:12:20.592 } 00:12:20.592 ], 00:12:20.592 "driver_specific": {} 00:12:20.592 } 00:12:20.592 ] 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.592 "name": "Existed_Raid", 00:12:20.592 "uuid": "c7634436-a7e4-4eb0-9029-e7c750a8621f", 00:12:20.592 "strip_size_kb": 64, 00:12:20.592 "state": "configuring", 00:12:20.592 "raid_level": "raid0", 00:12:20.592 "superblock": true, 00:12:20.592 "num_base_bdevs": 3, 00:12:20.592 "num_base_bdevs_discovered": 1, 00:12:20.592 "num_base_bdevs_operational": 3, 00:12:20.592 "base_bdevs_list": [ 00:12:20.592 { 00:12:20.592 "name": "BaseBdev1", 00:12:20.592 "uuid": "1f56910c-c417-4b19-9213-954e0b616124", 00:12:20.592 "is_configured": true, 00:12:20.592 "data_offset": 2048, 00:12:20.592 "data_size": 63488 
00:12:20.592 }, 00:12:20.592 { 00:12:20.592 "name": "BaseBdev2", 00:12:20.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.592 "is_configured": false, 00:12:20.592 "data_offset": 0, 00:12:20.592 "data_size": 0 00:12:20.592 }, 00:12:20.592 { 00:12:20.592 "name": "BaseBdev3", 00:12:20.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.592 "is_configured": false, 00:12:20.592 "data_offset": 0, 00:12:20.592 "data_size": 0 00:12:20.592 } 00:12:20.592 ] 00:12:20.592 }' 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.592 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.161 [2024-11-27 14:11:51.417116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.161 [2024-11-27 14:11:51.417185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.161 [2024-11-27 14:11:51.425170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.161 [2024-11-27 
14:11:51.427625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.161 [2024-11-27 14:11:51.427678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.161 [2024-11-27 14:11:51.427698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.161 [2024-11-27 14:11:51.427714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.161 "name": "Existed_Raid", 00:12:21.161 "uuid": "94b7096c-9941-4347-82fe-c353afd40224", 00:12:21.161 "strip_size_kb": 64, 00:12:21.161 "state": "configuring", 00:12:21.161 "raid_level": "raid0", 00:12:21.161 "superblock": true, 00:12:21.161 "num_base_bdevs": 3, 00:12:21.161 "num_base_bdevs_discovered": 1, 00:12:21.161 "num_base_bdevs_operational": 3, 00:12:21.161 "base_bdevs_list": [ 00:12:21.161 { 00:12:21.161 "name": "BaseBdev1", 00:12:21.161 "uuid": "1f56910c-c417-4b19-9213-954e0b616124", 00:12:21.161 "is_configured": true, 00:12:21.161 "data_offset": 2048, 00:12:21.161 "data_size": 63488 00:12:21.161 }, 00:12:21.161 { 00:12:21.161 "name": "BaseBdev2", 00:12:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.161 "is_configured": false, 00:12:21.161 "data_offset": 0, 00:12:21.161 "data_size": 0 00:12:21.161 }, 00:12:21.161 { 00:12:21.161 "name": "BaseBdev3", 00:12:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.161 "is_configured": false, 00:12:21.161 "data_offset": 0, 00:12:21.161 "data_size": 0 00:12:21.161 } 00:12:21.161 ] 00:12:21.161 }' 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.161 14:11:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.728 [2024-11-27 14:11:51.983625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.728 BaseBdev2 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.728 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.728 [ 00:12:21.728 { 00:12:21.728 "name": "BaseBdev2", 00:12:21.728 "aliases": [ 00:12:21.728 "0f9c3a27-fea9-43d2-b283-f397088cb3c0" 00:12:21.728 ], 00:12:21.728 "product_name": "Malloc disk", 00:12:21.728 "block_size": 512, 00:12:21.728 "num_blocks": 65536, 00:12:21.728 "uuid": "0f9c3a27-fea9-43d2-b283-f397088cb3c0", 00:12:21.728 "assigned_rate_limits": { 00:12:21.728 "rw_ios_per_sec": 0, 00:12:21.728 "rw_mbytes_per_sec": 0, 00:12:21.728 "r_mbytes_per_sec": 0, 00:12:21.728 "w_mbytes_per_sec": 0 00:12:21.728 }, 00:12:21.728 "claimed": true, 00:12:21.728 "claim_type": "exclusive_write", 00:12:21.728 "zoned": false, 00:12:21.728 "supported_io_types": { 00:12:21.728 "read": true, 00:12:21.728 "write": true, 00:12:21.728 "unmap": true, 00:12:21.728 "flush": true, 00:12:21.728 "reset": true, 00:12:21.728 "nvme_admin": false, 00:12:21.728 "nvme_io": false, 00:12:21.728 "nvme_io_md": false, 00:12:21.728 "write_zeroes": true, 00:12:21.728 "zcopy": true, 00:12:21.728 "get_zone_info": false, 00:12:21.728 "zone_management": false, 00:12:21.728 "zone_append": false, 00:12:21.728 "compare": false, 00:12:21.728 "compare_and_write": false, 00:12:21.728 "abort": true, 00:12:21.728 "seek_hole": false, 00:12:21.728 "seek_data": false, 00:12:21.728 "copy": true, 00:12:21.728 "nvme_iov_md": false 00:12:21.728 }, 00:12:21.728 "memory_domains": [ 00:12:21.728 { 00:12:21.728 "dma_device_id": "system", 00:12:21.728 "dma_device_type": 1 00:12:21.728 }, 00:12:21.728 { 00:12:21.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.728 "dma_device_type": 2 00:12:21.728 } 00:12:21.728 ], 00:12:21.728 "driver_specific": {} 00:12:21.728 } 00:12:21.728 ] 00:12:21.728 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.728 14:11:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.729 "name": "Existed_Raid", 00:12:21.729 "uuid": "94b7096c-9941-4347-82fe-c353afd40224", 00:12:21.729 "strip_size_kb": 64, 00:12:21.729 "state": "configuring", 00:12:21.729 "raid_level": "raid0", 00:12:21.729 "superblock": true, 00:12:21.729 "num_base_bdevs": 3, 00:12:21.729 "num_base_bdevs_discovered": 2, 00:12:21.729 "num_base_bdevs_operational": 3, 00:12:21.729 "base_bdevs_list": [ 00:12:21.729 { 00:12:21.729 "name": "BaseBdev1", 00:12:21.729 "uuid": "1f56910c-c417-4b19-9213-954e0b616124", 00:12:21.729 "is_configured": true, 00:12:21.729 "data_offset": 2048, 00:12:21.729 "data_size": 63488 00:12:21.729 }, 00:12:21.729 { 00:12:21.729 "name": "BaseBdev2", 00:12:21.729 "uuid": "0f9c3a27-fea9-43d2-b283-f397088cb3c0", 00:12:21.729 "is_configured": true, 00:12:21.729 "data_offset": 2048, 00:12:21.729 "data_size": 63488 00:12:21.729 }, 00:12:21.729 { 00:12:21.729 "name": "BaseBdev3", 00:12:21.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.729 "is_configured": false, 00:12:21.729 "data_offset": 0, 00:12:21.729 "data_size": 0 00:12:21.729 } 00:12:21.729 ] 00:12:21.729 }' 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.729 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 [2024-11-27 14:11:52.574787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.295 [2024-11-27 14:11:52.575131] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.295 [2024-11-27 14:11:52.575163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:22.295 [2024-11-27 14:11:52.575503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:22.295 BaseBdev3 00:12:22.295 [2024-11-27 14:11:52.575732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.295 [2024-11-27 14:11:52.575758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:22.295 [2024-11-27 14:11:52.575971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.295 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 [ 00:12:22.295 { 00:12:22.295 "name": "BaseBdev3", 00:12:22.295 "aliases": [ 00:12:22.296 "1ece6a07-1dd4-4d3c-8f1b-eaef46c09d28" 00:12:22.296 ], 00:12:22.296 "product_name": "Malloc disk", 00:12:22.296 "block_size": 512, 00:12:22.296 "num_blocks": 65536, 00:12:22.296 "uuid": "1ece6a07-1dd4-4d3c-8f1b-eaef46c09d28", 00:12:22.296 "assigned_rate_limits": { 00:12:22.296 "rw_ios_per_sec": 0, 00:12:22.296 "rw_mbytes_per_sec": 0, 00:12:22.296 "r_mbytes_per_sec": 0, 00:12:22.296 "w_mbytes_per_sec": 0 00:12:22.296 }, 00:12:22.296 "claimed": true, 00:12:22.296 "claim_type": "exclusive_write", 00:12:22.296 "zoned": false, 00:12:22.296 "supported_io_types": { 00:12:22.296 "read": true, 00:12:22.296 "write": true, 00:12:22.296 "unmap": true, 00:12:22.296 "flush": true, 00:12:22.296 "reset": true, 00:12:22.296 "nvme_admin": false, 00:12:22.296 "nvme_io": false, 00:12:22.296 "nvme_io_md": false, 00:12:22.296 "write_zeroes": true, 00:12:22.296 "zcopy": true, 00:12:22.296 "get_zone_info": false, 00:12:22.296 "zone_management": false, 00:12:22.296 "zone_append": false, 00:12:22.296 "compare": false, 00:12:22.296 "compare_and_write": false, 00:12:22.296 "abort": true, 00:12:22.296 "seek_hole": false, 00:12:22.296 "seek_data": false, 00:12:22.296 "copy": true, 00:12:22.296 "nvme_iov_md": false 00:12:22.296 }, 00:12:22.296 "memory_domains": [ 00:12:22.296 { 00:12:22.296 "dma_device_id": "system", 00:12:22.296 "dma_device_type": 1 00:12:22.296 }, 00:12:22.296 { 00:12:22.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.296 "dma_device_type": 2 00:12:22.296 } 00:12:22.296 ], 00:12:22.296 "driver_specific": 
{} 00:12:22.296 } 00:12:22.296 ] 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.296 
14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.296 "name": "Existed_Raid", 00:12:22.296 "uuid": "94b7096c-9941-4347-82fe-c353afd40224", 00:12:22.296 "strip_size_kb": 64, 00:12:22.296 "state": "online", 00:12:22.296 "raid_level": "raid0", 00:12:22.296 "superblock": true, 00:12:22.296 "num_base_bdevs": 3, 00:12:22.296 "num_base_bdevs_discovered": 3, 00:12:22.296 "num_base_bdevs_operational": 3, 00:12:22.296 "base_bdevs_list": [ 00:12:22.296 { 00:12:22.296 "name": "BaseBdev1", 00:12:22.296 "uuid": "1f56910c-c417-4b19-9213-954e0b616124", 00:12:22.296 "is_configured": true, 00:12:22.296 "data_offset": 2048, 00:12:22.296 "data_size": 63488 00:12:22.296 }, 00:12:22.296 { 00:12:22.296 "name": "BaseBdev2", 00:12:22.296 "uuid": "0f9c3a27-fea9-43d2-b283-f397088cb3c0", 00:12:22.296 "is_configured": true, 00:12:22.296 "data_offset": 2048, 00:12:22.296 "data_size": 63488 00:12:22.296 }, 00:12:22.296 { 00:12:22.296 "name": "BaseBdev3", 00:12:22.296 "uuid": "1ece6a07-1dd4-4d3c-8f1b-eaef46c09d28", 00:12:22.296 "is_configured": true, 00:12:22.296 "data_offset": 2048, 00:12:22.296 "data_size": 63488 00:12:22.296 } 00:12:22.296 ] 00:12:22.296 }' 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.296 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:22.862 [2024-11-27 14:11:53.111398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.862 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:22.862 "name": "Existed_Raid",
00:12:22.862 "aliases": [
00:12:22.862 "94b7096c-9941-4347-82fe-c353afd40224"
00:12:22.862 ],
00:12:22.862 "product_name": "Raid Volume",
00:12:22.862 "block_size": 512,
00:12:22.862 "num_blocks": 190464,
00:12:22.862 "uuid": "94b7096c-9941-4347-82fe-c353afd40224",
00:12:22.862 "assigned_rate_limits": {
00:12:22.862 "rw_ios_per_sec": 0,
00:12:22.862 "rw_mbytes_per_sec": 0,
00:12:22.862 "r_mbytes_per_sec": 0,
00:12:22.862 "w_mbytes_per_sec": 0
00:12:22.862 },
00:12:22.862 "claimed": false,
00:12:22.862 "zoned": false,
00:12:22.862 "supported_io_types": {
00:12:22.862 "read": true,
00:12:22.862 "write": true,
00:12:22.862 "unmap": true,
00:12:22.862 "flush": true,
00:12:22.862 "reset": true,
00:12:22.862 "nvme_admin": false,
00:12:22.862 "nvme_io": false,
00:12:22.862 "nvme_io_md": false,
00:12:22.862 "write_zeroes": true,
00:12:22.862 "zcopy": false,
00:12:22.862 "get_zone_info": false,
00:12:22.862 "zone_management": false,
00:12:22.862 "zone_append": false,
00:12:22.862 "compare": false,
00:12:22.862 "compare_and_write": false,
00:12:22.862 "abort": false,
00:12:22.862 "seek_hole": false,
00:12:22.862 "seek_data": false,
00:12:22.862 "copy": false,
00:12:22.862 "nvme_iov_md": false
00:12:22.862 },
00:12:22.862 "memory_domains": [
00:12:22.862 {
00:12:22.862 "dma_device_id": "system",
00:12:22.862 "dma_device_type": 1
00:12:22.862 },
00:12:22.862 {
00:12:22.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:22.862 "dma_device_type": 2
00:12:22.862 },
00:12:22.862 {
00:12:22.862 "dma_device_id": "system",
00:12:22.863 "dma_device_type": 1
00:12:22.863 },
00:12:22.863 {
00:12:22.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:22.863 "dma_device_type": 2
00:12:22.863 }
00:12:22.863 ],
00:12:22.863 "driver_specific": {
00:12:22.863 "raid": {
00:12:22.863 "uuid": "94b7096c-9941-4347-82fe-c353afd40224",
00:12:22.863 "strip_size_kb": 64,
00:12:22.863 "state": "online",
00:12:22.863 "raid_level": "raid0",
00:12:22.863 "superblock": true,
00:12:22.863 "num_base_bdevs": 3,
00:12:22.863 "num_base_bdevs_discovered": 3,
00:12:22.863 "num_base_bdevs_operational": 3,
00:12:22.863 "base_bdevs_list": [
00:12:22.863 {
00:12:22.863 "name": "BaseBdev1",
00:12:22.863 "uuid": "1f56910c-c417-4b19-9213-954e0b616124",
00:12:22.863 "is_configured": true,
00:12:22.863 "data_offset": 2048,
00:12:22.863 "data_size": 63488
00:12:22.863 },
00:12:22.863 {
00:12:22.863 "name": "BaseBdev2",
00:12:22.863 "uuid": "0f9c3a27-fea9-43d2-b283-f397088cb3c0",
00:12:22.863 "is_configured": true,
00:12:22.863 "data_offset": 2048,
00:12:22.863 "data_size": 63488
00:12:22.863 },
00:12:22.863 {
00:12:22.863 "name": "BaseBdev3",
00:12:22.863 "uuid": "1ece6a07-1dd4-4d3c-8f1b-eaef46c09d28",
00:12:22.863 "is_configured": true,
00:12:22.863 "data_offset": 2048,
00:12:22.863 "data_size": 63488
00:12:22.863 }
00:12:22.863 ]
00:12:22.863 }
00:12:22.863 }
00:12:22.863 }'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:12:22.863 BaseBdev2
00:12:22.863 BaseBdev3'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:22.863 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.121 [2024-11-27 14:11:53.407095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:23.121 [2024-11-27 14:11:53.407134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:23.121 [2024-11-27 14:11:53.407206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.121 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:23.122 "name": "Existed_Raid",
00:12:23.122 "uuid": "94b7096c-9941-4347-82fe-c353afd40224",
00:12:23.122 "strip_size_kb": 64,
00:12:23.122 "state": "offline",
00:12:23.122 "raid_level": "raid0",
00:12:23.122 "superblock": true,
00:12:23.122 "num_base_bdevs": 3,
00:12:23.122 "num_base_bdevs_discovered": 2,
00:12:23.122 "num_base_bdevs_operational": 2,
00:12:23.122 "base_bdevs_list": [
00:12:23.122 {
00:12:23.122 "name": null,
00:12:23.122 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:23.122 "is_configured": false,
00:12:23.122 "data_offset": 0,
00:12:23.122 "data_size": 63488
00:12:23.122 },
00:12:23.122 {
00:12:23.122 "name": "BaseBdev2",
00:12:23.122 "uuid": "0f9c3a27-fea9-43d2-b283-f397088cb3c0",
00:12:23.122 "is_configured": true,
00:12:23.122 "data_offset": 2048,
00:12:23.122 "data_size": 63488
00:12:23.122 },
00:12:23.122 {
00:12:23.122 "name": "BaseBdev3",
00:12:23.122 "uuid": "1ece6a07-1dd4-4d3c-8f1b-eaef46c09d28",
00:12:23.122 "is_configured": true,
00:12:23.122 "data_offset": 2048,
00:12:23.122 "data_size": 63488
00:12:23.122 }
00:12:23.122 ]
00:12:23.122 }'
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:23.122 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.689 [2024-11-27 14:11:54.061255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.689 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.948 [2024-11-27 14:11:54.206997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:23.948 [2024-11-27 14:11:54.207079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.948 BaseBdev2
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.948 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:23.948 [
00:12:23.948 {
00:12:23.948 "name": "BaseBdev2",
00:12:23.949 "aliases": [
00:12:23.949 "139acd8d-8a04-4773-abfc-50a440a1c3a5"
00:12:23.949 ],
00:12:23.949 "product_name": "Malloc disk",
00:12:23.949 "block_size": 512,
00:12:23.949 "num_blocks": 65536,
00:12:23.949 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5",
00:12:23.949 "assigned_rate_limits": {
00:12:23.949 "rw_ios_per_sec": 0,
00:12:23.949 "rw_mbytes_per_sec": 0,
00:12:23.949 "r_mbytes_per_sec": 0,
00:12:23.949 "w_mbytes_per_sec": 0
00:12:23.949 },
00:12:23.949 "claimed": false,
00:12:23.949 "zoned": false,
00:12:23.949 "supported_io_types": {
00:12:23.949 "read": true,
00:12:23.949 "write": true,
00:12:23.949 "unmap": true,
00:12:23.949 "flush": true,
00:12:23.949 "reset": true,
00:12:23.949 "nvme_admin": false,
00:12:23.949 "nvme_io": false,
00:12:23.949 "nvme_io_md": false,
00:12:23.949 "write_zeroes": true,
00:12:23.949 "zcopy": true,
00:12:23.949 "get_zone_info": false,
00:12:23.949 "zone_management": false,
00:12:23.949 "zone_append": false,
00:12:23.949 "compare": false,
00:12:23.949 "compare_and_write": false,
00:12:23.949 "abort": true,
00:12:23.949 "seek_hole": false,
00:12:23.949 "seek_data": false,
00:12:23.949 "copy": true,
00:12:23.949 "nvme_iov_md": false
00:12:23.949 },
00:12:23.949 "memory_domains": [
00:12:23.949 {
00:12:23.949 "dma_device_id": "system",
00:12:23.949 "dma_device_type": 1
00:12:23.949 },
00:12:23.949 {
00:12:23.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.949 "dma_device_type": 2
00:12:23.949 }
00:12:23.949 ],
00:12:23.949 "driver_specific": {}
00:12:23.949 }
00:12:23.949 ]
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.949 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.208 BaseBdev3
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.208 [
00:12:24.208 {
00:12:24.208 "name": "BaseBdev3",
00:12:24.208 "aliases": [
00:12:24.208 "9bf43660-f42d-4fe2-abf0-3d62a0b50763"
00:12:24.208 ],
00:12:24.208 "product_name": "Malloc disk",
00:12:24.208 "block_size": 512,
00:12:24.208 "num_blocks": 65536,
00:12:24.208 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763",
00:12:24.208 "assigned_rate_limits": {
00:12:24.208 "rw_ios_per_sec": 0,
00:12:24.208 "rw_mbytes_per_sec": 0,
00:12:24.208 "r_mbytes_per_sec": 0,
00:12:24.208 "w_mbytes_per_sec": 0
00:12:24.208 },
00:12:24.208 "claimed": false,
00:12:24.208 "zoned": false,
00:12:24.208 "supported_io_types": {
00:12:24.208 "read": true,
00:12:24.208 "write": true,
00:12:24.208 "unmap": true,
00:12:24.208 "flush": true,
00:12:24.208 "reset": true,
00:12:24.208 "nvme_admin": false,
00:12:24.208 "nvme_io": false,
00:12:24.208 "nvme_io_md": false,
00:12:24.208 "write_zeroes": true,
00:12:24.208 "zcopy": true,
00:12:24.208 "get_zone_info": false,
00:12:24.208 "zone_management": false,
00:12:24.208 "zone_append": false,
00:12:24.208 "compare": false,
00:12:24.208 "compare_and_write": false,
00:12:24.208 "abort": true,
00:12:24.208 "seek_hole": false,
00:12:24.208 "seek_data": false,
00:12:24.208 "copy": true,
00:12:24.208 "nvme_iov_md": false
00:12:24.208 },
00:12:24.208 "memory_domains": [
00:12:24.208 {
00:12:24.208 "dma_device_id": "system",
00:12:24.208 "dma_device_type": 1
00:12:24.208 },
00:12:24.208 {
00:12:24.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:24.208 "dma_device_type": 2
00:12:24.208 }
00:12:24.208 ],
00:12:24.208 "driver_specific": {}
00:12:24.208 }
00:12:24.208 ]
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.208 [2024-11-27 14:11:54.505919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:24.208 [2024-11-27 14:11:54.505980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:24.208 [2024-11-27 14:11:54.506015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:24.208 [2024-11-27 14:11:54.508449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:24.208 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.209 "name": "Existed_Raid",
00:12:24.209 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330",
00:12:24.209 "strip_size_kb": 64,
00:12:24.209 "state": "configuring",
00:12:24.209 "raid_level": "raid0",
00:12:24.209 "superblock": true,
00:12:24.209 "num_base_bdevs": 3,
00:12:24.209 "num_base_bdevs_discovered": 2,
00:12:24.209 "num_base_bdevs_operational": 3,
00:12:24.209 "base_bdevs_list": [
00:12:24.209 {
00:12:24.209 "name": "BaseBdev1",
00:12:24.209 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.209 "is_configured": false,
00:12:24.209 "data_offset": 0,
00:12:24.209 "data_size": 0
00:12:24.209 },
00:12:24.209 {
00:12:24.209 "name": "BaseBdev2",
00:12:24.209 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5",
00:12:24.209 "is_configured": true,
00:12:24.209 "data_offset": 2048,
00:12:24.209 "data_size": 63488
00:12:24.209 },
00:12:24.209 {
00:12:24.209 "name": "BaseBdev3",
00:12:24.209 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763",
00:12:24.209 "is_configured": true,
00:12:24.209 "data_offset": 2048,
00:12:24.209 "data_size": 63488
00:12:24.209 }
00:12:24.209 ]
00:12:24.209 }'
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.209 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.776 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:24.776 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.776 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.777 [2024-11-27 14:11:55.038083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.777 "name": "Existed_Raid",
00:12:24.777 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330",
00:12:24.777 "strip_size_kb": 64,
00:12:24.777 "state": "configuring",
00:12:24.777 "raid_level": "raid0",
00:12:24.777 "superblock": true,
00:12:24.777 "num_base_bdevs": 3,
00:12:24.777 "num_base_bdevs_discovered": 1,
00:12:24.777 "num_base_bdevs_operational": 3,
00:12:24.777 "base_bdevs_list": [
00:12:24.777 {
00:12:24.777 "name": "BaseBdev1",
00:12:24.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.777 "is_configured": false,
00:12:24.777 "data_offset": 0,
00:12:24.777 "data_size": 0
00:12:24.777 },
00:12:24.777 {
00:12:24.777 "name": null,
00:12:24.777 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5",
00:12:24.777 "is_configured": false,
00:12:24.777 "data_offset": 0,
00:12:24.777 "data_size": 63488
00:12:24.777 },
00:12:24.777 {
00:12:24.777 "name": "BaseBdev3",
00:12:24.777 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763",
00:12:24.777 "is_configured": true,
00:12:24.777 "data_offset": 2048,
00:12:24.777 "data_size": 63488
00:12:24.777 }
00:12:24.777 ]
00:12:24.777 }'
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.777 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.345 [2024-11-27 14:11:55.687440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:25.345 BaseBdev1
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.345 [
00:12:25.345 {
00:12:25.345 "name": "BaseBdev1",
00:12:25.345 "aliases": [
00:12:25.345 "34d5e832-6a53-4f2f-aec3-77b79131e08d"
00:12:25.345 ],
00:12:25.345 "product_name": "Malloc disk",
00:12:25.345 "block_size": 512,
00:12:25.345 "num_blocks": 65536,
00:12:25.345 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d",
00:12:25.345 "assigned_rate_limits": {
00:12:25.345 "rw_ios_per_sec": 0,
00:12:25.345 "rw_mbytes_per_sec": 0,
00:12:25.345 "r_mbytes_per_sec": 0,
00:12:25.345 "w_mbytes_per_sec": 0
00:12:25.345 },
00:12:25.345 "claimed": true,
00:12:25.345 "claim_type": "exclusive_write",
00:12:25.345 "zoned": false,
00:12:25.345 "supported_io_types": {
00:12:25.345 "read": true,
00:12:25.345 "write": true,
00:12:25.345 "unmap": true,
00:12:25.345 "flush": true,
00:12:25.345 "reset": true,
00:12:25.345 "nvme_admin": false,
00:12:25.345 "nvme_io": false,
00:12:25.345 "nvme_io_md": false,
00:12:25.345 "write_zeroes": true,
00:12:25.345 "zcopy": true,
00:12:25.345 "get_zone_info": false,
00:12:25.345 "zone_management": false,
00:12:25.345 "zone_append": false,
00:12:25.345 "compare": false,
00:12:25.345 "compare_and_write": false,
00:12:25.345 "abort": true,
00:12:25.345 "seek_hole": false,
00:12:25.345 "seek_data": false,
00:12:25.345 "copy": true,
00:12:25.345 "nvme_iov_md": false
00:12:25.345 },
00:12:25.345 "memory_domains": [
00:12:25.345 {
00:12:25.345 "dma_device_id": "system",
00:12:25.345 "dma_device_type": 1
00:12:25.345 },
00:12:25.345 {
00:12:25.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:25.345 "dma_device_type": 2
00:12:25.345 }
00:12:25.345 ],
00:12:25.345 "driver_specific": {}
00:12:25.345 }
00:12:25.345 ]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:25.345 "name": "Existed_Raid",
00:12:25.345 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330",
00:12:25.345 "strip_size_kb": 64,
00:12:25.345 "state": "configuring",
00:12:25.345 "raid_level": "raid0",
00:12:25.345 "superblock": true,
00:12:25.345 "num_base_bdevs": 3,
00:12:25.345 "num_base_bdevs_discovered": 2,
00:12:25.345 "num_base_bdevs_operational": 3,
00:12:25.345 "base_bdevs_list": [
00:12:25.345 {
00:12:25.345 "name": "BaseBdev1",
00:12:25.345 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d",
00:12:25.345 "is_configured": true,
00:12:25.345 "data_offset": 2048,
00:12:25.345 "data_size": 63488
00:12:25.345 },
00:12:25.345 {
00:12:25.345 "name": null,
00:12:25.345 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5",
00:12:25.345 "is_configured": false,
00:12:25.345 "data_offset": 0,
00:12:25.345 "data_size": 63488
00:12:25.345 },
00:12:25.345 {
00:12:25.345 "name": "BaseBdev3",
00:12:25.345 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763",
00:12:25.345 "is_configured": true,
00:12:25.345 "data_offset": 2048,
00:12:25.345 "data_size": 63488
00:12:25.345 }
00:12:25.345 ]
00:12:25.345 }'
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:25.345 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 --
# xtrace_disable 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.913 [2024-11-27 14:11:56.307750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.913 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.914 "name": "Existed_Raid", 00:12:25.914 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:25.914 "strip_size_kb": 64, 00:12:25.914 "state": "configuring", 00:12:25.914 "raid_level": "raid0", 00:12:25.914 "superblock": true, 00:12:25.914 "num_base_bdevs": 3, 00:12:25.914 "num_base_bdevs_discovered": 1, 00:12:25.914 "num_base_bdevs_operational": 3, 00:12:25.914 "base_bdevs_list": [ 00:12:25.914 { 00:12:25.914 "name": "BaseBdev1", 00:12:25.914 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:25.914 "is_configured": true, 00:12:25.914 "data_offset": 2048, 00:12:25.914 "data_size": 63488 00:12:25.914 }, 00:12:25.914 { 00:12:25.914 "name": null, 00:12:25.914 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5", 00:12:25.914 "is_configured": false, 00:12:25.914 "data_offset": 0, 00:12:25.914 "data_size": 63488 00:12:25.914 }, 00:12:25.914 { 00:12:25.914 "name": null, 00:12:25.914 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763", 00:12:25.914 "is_configured": false, 00:12:25.914 "data_offset": 0, 00:12:25.914 "data_size": 63488 00:12:25.914 } 00:12:25.914 ] 00:12:25.914 }' 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.914 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.483 [2024-11-27 14:11:56.939975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.483 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.741 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.741 "name": "Existed_Raid", 00:12:26.741 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:26.741 "strip_size_kb": 64, 00:12:26.741 "state": "configuring", 00:12:26.741 "raid_level": "raid0", 00:12:26.741 "superblock": true, 00:12:26.741 "num_base_bdevs": 3, 00:12:26.741 "num_base_bdevs_discovered": 2, 00:12:26.741 "num_base_bdevs_operational": 3, 00:12:26.741 "base_bdevs_list": [ 00:12:26.741 { 00:12:26.741 "name": "BaseBdev1", 00:12:26.741 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:26.741 "is_configured": true, 00:12:26.741 "data_offset": 2048, 00:12:26.741 "data_size": 63488 00:12:26.741 }, 00:12:26.741 { 00:12:26.741 "name": null, 00:12:26.741 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5", 00:12:26.741 "is_configured": false, 00:12:26.741 "data_offset": 0, 00:12:26.741 "data_size": 63488 00:12:26.741 }, 00:12:26.741 { 00:12:26.741 "name": "BaseBdev3", 00:12:26.741 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763", 00:12:26.741 "is_configured": true, 00:12:26.741 "data_offset": 2048, 00:12:26.741 "data_size": 63488 00:12:26.741 } 00:12:26.741 ] 00:12:26.741 }' 00:12:26.741 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.741 14:11:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.000 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.000 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:27.000 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.000 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.000 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.258 [2024-11-27 14:11:57.536213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.258 "name": "Existed_Raid", 00:12:27.258 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:27.258 "strip_size_kb": 64, 00:12:27.258 "state": "configuring", 00:12:27.258 "raid_level": "raid0", 00:12:27.258 "superblock": true, 00:12:27.258 "num_base_bdevs": 3, 00:12:27.258 "num_base_bdevs_discovered": 1, 00:12:27.258 "num_base_bdevs_operational": 3, 00:12:27.258 "base_bdevs_list": [ 00:12:27.258 { 00:12:27.258 "name": null, 00:12:27.258 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:27.258 "is_configured": false, 00:12:27.258 "data_offset": 0, 00:12:27.258 "data_size": 63488 00:12:27.258 }, 00:12:27.258 { 00:12:27.258 "name": null, 00:12:27.258 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5", 00:12:27.258 "is_configured": false, 00:12:27.258 "data_offset": 0, 00:12:27.258 
"data_size": 63488 00:12:27.258 }, 00:12:27.258 { 00:12:27.258 "name": "BaseBdev3", 00:12:27.258 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763", 00:12:27.258 "is_configured": true, 00:12:27.258 "data_offset": 2048, 00:12:27.258 "data_size": 63488 00:12:27.258 } 00:12:27.258 ] 00:12:27.258 }' 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.258 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.826 [2024-11-27 14:11:58.209580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:27.826 14:11:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.826 "name": "Existed_Raid", 00:12:27.826 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:27.826 "strip_size_kb": 64, 00:12:27.826 "state": "configuring", 00:12:27.826 "raid_level": "raid0", 00:12:27.826 "superblock": true, 00:12:27.826 "num_base_bdevs": 3, 00:12:27.826 
"num_base_bdevs_discovered": 2, 00:12:27.826 "num_base_bdevs_operational": 3, 00:12:27.826 "base_bdevs_list": [ 00:12:27.826 { 00:12:27.826 "name": null, 00:12:27.826 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:27.826 "is_configured": false, 00:12:27.826 "data_offset": 0, 00:12:27.826 "data_size": 63488 00:12:27.826 }, 00:12:27.826 { 00:12:27.826 "name": "BaseBdev2", 00:12:27.826 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5", 00:12:27.826 "is_configured": true, 00:12:27.826 "data_offset": 2048, 00:12:27.826 "data_size": 63488 00:12:27.826 }, 00:12:27.826 { 00:12:27.826 "name": "BaseBdev3", 00:12:27.826 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763", 00:12:27.826 "is_configured": true, 00:12:27.826 "data_offset": 2048, 00:12:27.826 "data_size": 63488 00:12:27.826 } 00:12:27.826 ] 00:12:27.826 }' 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.826 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.394 14:11:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 34d5e832-6a53-4f2f-aec3-77b79131e08d 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.394 [2024-11-27 14:11:58.885149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:28.394 [2024-11-27 14:11:58.885470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:28.394 [2024-11-27 14:11:58.885509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:28.394 [2024-11-27 14:11:58.885835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:28.394 NewBaseBdev 00:12:28.394 [2024-11-27 14:11:58.886070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:28.394 [2024-11-27 14:11:58.886088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:28.394 [2024-11-27 14:11:58.886257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:28.394 
14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.394 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.652 [ 00:12:28.652 { 00:12:28.652 "name": "NewBaseBdev", 00:12:28.652 "aliases": [ 00:12:28.652 "34d5e832-6a53-4f2f-aec3-77b79131e08d" 00:12:28.652 ], 00:12:28.652 "product_name": "Malloc disk", 00:12:28.652 "block_size": 512, 00:12:28.652 "num_blocks": 65536, 00:12:28.652 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:28.652 "assigned_rate_limits": { 00:12:28.652 "rw_ios_per_sec": 0, 00:12:28.652 "rw_mbytes_per_sec": 0, 00:12:28.652 "r_mbytes_per_sec": 0, 00:12:28.652 "w_mbytes_per_sec": 0 00:12:28.652 }, 00:12:28.652 "claimed": true, 00:12:28.652 "claim_type": "exclusive_write", 00:12:28.652 "zoned": false, 00:12:28.652 "supported_io_types": { 00:12:28.652 "read": true, 00:12:28.652 "write": true, 00:12:28.652 
"unmap": true, 00:12:28.652 "flush": true, 00:12:28.653 "reset": true, 00:12:28.653 "nvme_admin": false, 00:12:28.653 "nvme_io": false, 00:12:28.653 "nvme_io_md": false, 00:12:28.653 "write_zeroes": true, 00:12:28.653 "zcopy": true, 00:12:28.653 "get_zone_info": false, 00:12:28.653 "zone_management": false, 00:12:28.653 "zone_append": false, 00:12:28.653 "compare": false, 00:12:28.653 "compare_and_write": false, 00:12:28.653 "abort": true, 00:12:28.653 "seek_hole": false, 00:12:28.653 "seek_data": false, 00:12:28.653 "copy": true, 00:12:28.653 "nvme_iov_md": false 00:12:28.653 }, 00:12:28.653 "memory_domains": [ 00:12:28.653 { 00:12:28.653 "dma_device_id": "system", 00:12:28.653 "dma_device_type": 1 00:12:28.653 }, 00:12:28.653 { 00:12:28.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.653 "dma_device_type": 2 00:12:28.653 } 00:12:28.653 ], 00:12:28.653 "driver_specific": {} 00:12:28.653 } 00:12:28.653 ] 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.653 "name": "Existed_Raid", 00:12:28.653 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:28.653 "strip_size_kb": 64, 00:12:28.653 "state": "online", 00:12:28.653 "raid_level": "raid0", 00:12:28.653 "superblock": true, 00:12:28.653 "num_base_bdevs": 3, 00:12:28.653 "num_base_bdevs_discovered": 3, 00:12:28.653 "num_base_bdevs_operational": 3, 00:12:28.653 "base_bdevs_list": [ 00:12:28.653 { 00:12:28.653 "name": "NewBaseBdev", 00:12:28.653 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:28.653 "is_configured": true, 00:12:28.653 "data_offset": 2048, 00:12:28.653 "data_size": 63488 00:12:28.653 }, 00:12:28.653 { 00:12:28.653 "name": "BaseBdev2", 00:12:28.653 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5", 00:12:28.653 "is_configured": true, 00:12:28.653 "data_offset": 2048, 00:12:28.653 "data_size": 63488 00:12:28.653 }, 00:12:28.653 { 00:12:28.653 "name": "BaseBdev3", 00:12:28.653 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763", 00:12:28.653 
"is_configured": true, 00:12:28.653 "data_offset": 2048, 00:12:28.653 "data_size": 63488 00:12:28.653 } 00:12:28.653 ] 00:12:28.653 }' 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.653 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.216 [2024-11-27 14:11:59.437881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.216 "name": "Existed_Raid", 00:12:29.216 "aliases": [ 00:12:29.216 "1a26f078-ae3d-43d9-b6b7-a4036100c330" 00:12:29.216 ], 00:12:29.216 "product_name": "Raid 
Volume", 00:12:29.216 "block_size": 512, 00:12:29.216 "num_blocks": 190464, 00:12:29.216 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:29.216 "assigned_rate_limits": { 00:12:29.216 "rw_ios_per_sec": 0, 00:12:29.216 "rw_mbytes_per_sec": 0, 00:12:29.216 "r_mbytes_per_sec": 0, 00:12:29.216 "w_mbytes_per_sec": 0 00:12:29.216 }, 00:12:29.216 "claimed": false, 00:12:29.216 "zoned": false, 00:12:29.216 "supported_io_types": { 00:12:29.216 "read": true, 00:12:29.216 "write": true, 00:12:29.216 "unmap": true, 00:12:29.216 "flush": true, 00:12:29.216 "reset": true, 00:12:29.216 "nvme_admin": false, 00:12:29.216 "nvme_io": false, 00:12:29.216 "nvme_io_md": false, 00:12:29.216 "write_zeroes": true, 00:12:29.216 "zcopy": false, 00:12:29.216 "get_zone_info": false, 00:12:29.216 "zone_management": false, 00:12:29.216 "zone_append": false, 00:12:29.216 "compare": false, 00:12:29.216 "compare_and_write": false, 00:12:29.216 "abort": false, 00:12:29.216 "seek_hole": false, 00:12:29.216 "seek_data": false, 00:12:29.216 "copy": false, 00:12:29.216 "nvme_iov_md": false 00:12:29.216 }, 00:12:29.216 "memory_domains": [ 00:12:29.216 { 00:12:29.216 "dma_device_id": "system", 00:12:29.216 "dma_device_type": 1 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.216 "dma_device_type": 2 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "dma_device_id": "system", 00:12:29.216 "dma_device_type": 1 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.216 "dma_device_type": 2 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "dma_device_id": "system", 00:12:29.216 "dma_device_type": 1 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.216 "dma_device_type": 2 00:12:29.216 } 00:12:29.216 ], 00:12:29.216 "driver_specific": { 00:12:29.216 "raid": { 00:12:29.216 "uuid": "1a26f078-ae3d-43d9-b6b7-a4036100c330", 00:12:29.216 "strip_size_kb": 64, 00:12:29.216 "state": "online", 
00:12:29.216 "raid_level": "raid0", 00:12:29.216 "superblock": true, 00:12:29.216 "num_base_bdevs": 3, 00:12:29.216 "num_base_bdevs_discovered": 3, 00:12:29.216 "num_base_bdevs_operational": 3, 00:12:29.216 "base_bdevs_list": [ 00:12:29.216 { 00:12:29.216 "name": "NewBaseBdev", 00:12:29.216 "uuid": "34d5e832-6a53-4f2f-aec3-77b79131e08d", 00:12:29.216 "is_configured": true, 00:12:29.216 "data_offset": 2048, 00:12:29.216 "data_size": 63488 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "name": "BaseBdev2", 00:12:29.216 "uuid": "139acd8d-8a04-4773-abfc-50a440a1c3a5", 00:12:29.216 "is_configured": true, 00:12:29.216 "data_offset": 2048, 00:12:29.216 "data_size": 63488 00:12:29.216 }, 00:12:29.216 { 00:12:29.216 "name": "BaseBdev3", 00:12:29.216 "uuid": "9bf43660-f42d-4fe2-abf0-3d62a0b50763", 00:12:29.216 "is_configured": true, 00:12:29.216 "data_offset": 2048, 00:12:29.216 "data_size": 63488 00:12:29.216 } 00:12:29.216 ] 00:12:29.216 } 00:12:29.216 } 00:12:29.216 }' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:29.216 BaseBdev2 00:12:29.216 BaseBdev3' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.216 14:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.216 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.217 14:11:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.217 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.474 [2024-11-27 14:11:59.757550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.474 [2024-11-27 14:11:59.757585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.474 [2024-11-27 14:11:59.757693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.474 [2024-11-27 14:11:59.757764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.474 [2024-11-27 14:11:59.757784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64602 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64602 ']' 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64602 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64602 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.474 killing process with pid 64602 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64602' 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64602 00:12:29.474 [2024-11-27 14:11:59.798704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.474 14:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64602 00:12:29.731 [2024-11-27 14:12:00.075854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.671 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:30.671 00:12:30.671 real 0m11.931s 00:12:30.671 user 0m19.801s 00:12:30.671 sys 0m1.636s 00:12:30.671 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.671 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.671 ************************************ 00:12:30.671 END TEST raid_state_function_test_sb 00:12:30.671 ************************************ 00:12:30.928 14:12:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:30.928 14:12:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:30.928 
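The geometry checks above (`[[ 512 == \5\1\2\ \ \ ]]`) work by flattening four bdev fields with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and comparing the resulting strings. The trailing spaces in `cmp_base_bdev='512   '` come from jq's `join(" ")` turning absent or null metadata fields into empty strings. A minimal Python re-implementation of that jq expression (illustrative only, not part of the SPDK test suite):

```python
import json

# Geometry fields flattened by bdev_raid.sh in the log above:
# jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
FIELDS = ["block_size", "md_size", "md_interleave", "dif_type"]

def geometry_key(bdev_info: dict) -> str:
    """Mimic jq's join(" "): missing or null fields become empty strings."""
    parts = []
    for field in FIELDS:
        value = bdev_info.get(field)
        parts.append("" if value is None else str(value))
    return " ".join(parts)

# Trimmed sample of the bdev JSON dumped in the log: block_size 512,
# no metadata fields present on the base bdevs.
bdev = json.loads('{"name": "raid_bdev1", "block_size": 512, "num_blocks": 190464}')

# Yields '512' followed by three separator spaces, matching the
# cmp_raid_bdev / cmp_base_bdev values compared in the log.
print(repr(geometry_key(bdev)))
```

Because the raid bdev and every base bdev produce the same key, the `[[ ... == ... ]]` comparisons in the log all pass.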
14:12:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.928 14:12:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 ************************************ 00:12:30.928 START TEST raid_superblock_test 00:12:30.928 ************************************ 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65245 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65245 00:12:30.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65245 ']' 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.928 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 [2024-11-27 14:12:01.307127] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:12:30.928 [2024-11-27 14:12:01.307588] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65245 ] 00:12:31.186 [2024-11-27 14:12:01.488272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.186 [2024-11-27 14:12:01.622627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.444 [2024-11-27 14:12:01.828449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.444 [2024-11-27 14:12:01.828721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:31.702 
14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.702 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 malloc1 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 [2024-11-27 14:12:02.259309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.961 [2024-11-27 14:12:02.259554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.961 [2024-11-27 14:12:02.259635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:31.961 [2024-11-27 14:12:02.259914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.961 [2024-11-27 14:12:02.262987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.961 [2024-11-27 14:12:02.263034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.961 pt1 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 malloc2 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 [2024-11-27 14:12:02.317214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.961 [2024-11-27 14:12:02.317428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.961 [2024-11-27 14:12:02.317514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:31.961 [2024-11-27 14:12:02.317646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.961 [2024-11-27 14:12:02.320690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.961 [2024-11-27 14:12:02.320882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.961 
pt2 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 malloc3 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 [2024-11-27 14:12:02.382480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.961 [2024-11-27 14:12:02.382693] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.961 [2024-11-27 14:12:02.382774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:31.961 [2024-11-27 14:12:02.382963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.961 [2024-11-27 14:12:02.385928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.961 [2024-11-27 14:12:02.385986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.961 pt3 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.961 [2024-11-27 14:12:02.394629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.961 [2024-11-27 14:12:02.397120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.961 [2024-11-27 14:12:02.397220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.961 [2024-11-27 14:12:02.397432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:31.961 [2024-11-27 14:12:02.397457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:31.961 [2024-11-27 14:12:02.397784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:31.961 [2024-11-27 14:12:02.398033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:31.961 [2024-11-27 14:12:02.398051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:31.961 [2024-11-27 14:12:02.398241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.961 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.962 14:12:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.962 "name": "raid_bdev1", 00:12:31.962 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:31.962 "strip_size_kb": 64, 00:12:31.962 "state": "online", 00:12:31.962 "raid_level": "raid0", 00:12:31.962 "superblock": true, 00:12:31.962 "num_base_bdevs": 3, 00:12:31.962 "num_base_bdevs_discovered": 3, 00:12:31.962 "num_base_bdevs_operational": 3, 00:12:31.962 "base_bdevs_list": [ 00:12:31.962 { 00:12:31.962 "name": "pt1", 00:12:31.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.962 "is_configured": true, 00:12:31.962 "data_offset": 2048, 00:12:31.962 "data_size": 63488 00:12:31.962 }, 00:12:31.962 { 00:12:31.962 "name": "pt2", 00:12:31.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.962 "is_configured": true, 00:12:31.962 "data_offset": 2048, 00:12:31.962 "data_size": 63488 00:12:31.962 }, 00:12:31.962 { 00:12:31.962 "name": "pt3", 00:12:31.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.962 "is_configured": true, 00:12:31.962 "data_offset": 2048, 00:12:31.962 "data_size": 63488 00:12:31.962 } 00:12:31.962 ] 00:12:31.962 }' 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.962 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 [2024-11-27 14:12:02.971272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.528 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.528 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.528 "name": "raid_bdev1", 00:12:32.528 "aliases": [ 00:12:32.528 "674bae11-1c48-4e9b-b970-1855c7dcef04" 00:12:32.528 ], 00:12:32.528 "product_name": "Raid Volume", 00:12:32.528 "block_size": 512, 00:12:32.528 "num_blocks": 190464, 00:12:32.528 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:32.528 "assigned_rate_limits": { 00:12:32.528 "rw_ios_per_sec": 0, 00:12:32.528 "rw_mbytes_per_sec": 0, 00:12:32.528 "r_mbytes_per_sec": 0, 00:12:32.528 "w_mbytes_per_sec": 0 00:12:32.528 }, 00:12:32.528 "claimed": false, 00:12:32.528 "zoned": false, 00:12:32.528 "supported_io_types": { 00:12:32.528 "read": true, 00:12:32.528 "write": true, 00:12:32.528 "unmap": true, 00:12:32.528 "flush": true, 00:12:32.528 "reset": true, 00:12:32.528 "nvme_admin": false, 00:12:32.528 "nvme_io": false, 00:12:32.528 "nvme_io_md": false, 00:12:32.528 "write_zeroes": true, 00:12:32.528 "zcopy": false, 00:12:32.528 "get_zone_info": false, 00:12:32.528 "zone_management": false, 00:12:32.528 "zone_append": false, 00:12:32.528 "compare": 
false, 00:12:32.528 "compare_and_write": false, 00:12:32.528 "abort": false, 00:12:32.528 "seek_hole": false, 00:12:32.528 "seek_data": false, 00:12:32.528 "copy": false, 00:12:32.528 "nvme_iov_md": false 00:12:32.528 }, 00:12:32.528 "memory_domains": [ 00:12:32.528 { 00:12:32.528 "dma_device_id": "system", 00:12:32.528 "dma_device_type": 1 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.528 "dma_device_type": 2 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "dma_device_id": "system", 00:12:32.528 "dma_device_type": 1 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.528 "dma_device_type": 2 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "dma_device_id": "system", 00:12:32.528 "dma_device_type": 1 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.528 "dma_device_type": 2 00:12:32.528 } 00:12:32.528 ], 00:12:32.528 "driver_specific": { 00:12:32.528 "raid": { 00:12:32.528 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:32.528 "strip_size_kb": 64, 00:12:32.528 "state": "online", 00:12:32.528 "raid_level": "raid0", 00:12:32.528 "superblock": true, 00:12:32.528 "num_base_bdevs": 3, 00:12:32.528 "num_base_bdevs_discovered": 3, 00:12:32.528 "num_base_bdevs_operational": 3, 00:12:32.528 "base_bdevs_list": [ 00:12:32.528 { 00:12:32.528 "name": "pt1", 00:12:32.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.528 "is_configured": true, 00:12:32.528 "data_offset": 2048, 00:12:32.528 "data_size": 63488 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "name": "pt2", 00:12:32.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.528 "is_configured": true, 00:12:32.528 "data_offset": 2048, 00:12:32.528 "data_size": 63488 00:12:32.528 }, 00:12:32.528 { 00:12:32.528 "name": "pt3", 00:12:32.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.528 "is_configured": true, 00:12:32.528 "data_offset": 2048, 00:12:32.528 "data_size": 
63488 00:12:32.528 } 00:12:32.528 ] 00:12:32.528 } 00:12:32.528 } 00:12:32.528 }' 00:12:32.528 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:32.786 pt2 00:12:32.786 pt3' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.786 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:32.786 [2024-11-27 14:12:03.283230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.044 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:33.044 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=674bae11-1c48-4e9b-b970-1855c7dcef04 00:12:33.044 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 674bae11-1c48-4e9b-b970-1855c7dcef04 ']' 00:12:33.044 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.044 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.044 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.044 [2024-11-27 14:12:03.330966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.044 [2024-11-27 14:12:03.331006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.044 [2024-11-27 14:12:03.331113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.044 [2024-11-27 14:12:03.331212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.045 [2024-11-27 14:12:03.331229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 [2024-11-27 14:12:03.475072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:33.045 [2024-11-27 14:12:03.477581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:33.045 [2024-11-27 14:12:03.477659] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:33.045 [2024-11-27 14:12:03.477733] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:33.045 [2024-11-27 14:12:03.477826] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:33.045 [2024-11-27 14:12:03.477886] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:33.045 [2024-11-27 14:12:03.477916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.045 [2024-11-27 14:12:03.477933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:33.045 request: 00:12:33.045 { 00:12:33.045 "name": "raid_bdev1", 00:12:33.045 "raid_level": "raid0", 00:12:33.045 "base_bdevs": [ 00:12:33.045 "malloc1", 00:12:33.045 "malloc2", 00:12:33.045 "malloc3" 00:12:33.045 ], 00:12:33.045 "strip_size_kb": 64, 00:12:33.045 "superblock": false, 00:12:33.045 "method": "bdev_raid_create", 00:12:33.045 "req_id": 1 00:12:33.045 } 00:12:33.045 Got JSON-RPC error response 00:12:33.045 response: 00:12:33.045 { 00:12:33.045 "code": -17, 00:12:33.045 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:33.045 } 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.045 [2024-11-27 14:12:03.542998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:33.045 [2024-11-27 14:12:03.543213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.045 [2024-11-27 14:12:03.543372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:33.045 [2024-11-27 14:12:03.543493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.045 [2024-11-27 14:12:03.546463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.045 [2024-11-27 14:12:03.546644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:33.045 [2024-11-27 14:12:03.546904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:33.045 [2024-11-27 14:12:03.547111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:12:33.045 pt1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.045 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.046 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.304 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.304 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.304 "name": "raid_bdev1", 00:12:33.304 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:33.304 
"strip_size_kb": 64, 00:12:33.304 "state": "configuring", 00:12:33.304 "raid_level": "raid0", 00:12:33.304 "superblock": true, 00:12:33.304 "num_base_bdevs": 3, 00:12:33.304 "num_base_bdevs_discovered": 1, 00:12:33.304 "num_base_bdevs_operational": 3, 00:12:33.304 "base_bdevs_list": [ 00:12:33.304 { 00:12:33.304 "name": "pt1", 00:12:33.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.304 "is_configured": true, 00:12:33.304 "data_offset": 2048, 00:12:33.304 "data_size": 63488 00:12:33.304 }, 00:12:33.304 { 00:12:33.304 "name": null, 00:12:33.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.304 "is_configured": false, 00:12:33.304 "data_offset": 2048, 00:12:33.304 "data_size": 63488 00:12:33.304 }, 00:12:33.304 { 00:12:33.304 "name": null, 00:12:33.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.304 "is_configured": false, 00:12:33.304 "data_offset": 2048, 00:12:33.304 "data_size": 63488 00:12:33.304 } 00:12:33.304 ] 00:12:33.304 }' 00:12:33.304 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.304 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.562 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:33.562 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:33.562 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.562 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.562 [2024-11-27 14:12:04.071661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:33.562 [2024-11-27 14:12:04.071751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.562 [2024-11-27 14:12:04.071791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:12:33.562 [2024-11-27 14:12:04.071806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.562 [2024-11-27 14:12:04.072443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.821 [2024-11-27 14:12:04.072642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:33.821 [2024-11-27 14:12:04.072775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:33.821 [2024-11-27 14:12:04.072838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:33.821 pt2 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.821 [2024-11-27 14:12:04.079614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.821 14:12:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.821 "name": "raid_bdev1", 00:12:33.821 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:33.821 "strip_size_kb": 64, 00:12:33.821 "state": "configuring", 00:12:33.821 "raid_level": "raid0", 00:12:33.821 "superblock": true, 00:12:33.821 "num_base_bdevs": 3, 00:12:33.821 "num_base_bdevs_discovered": 1, 00:12:33.821 "num_base_bdevs_operational": 3, 00:12:33.821 "base_bdevs_list": [ 00:12:33.821 { 00:12:33.821 "name": "pt1", 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.821 "is_configured": true, 00:12:33.821 "data_offset": 2048, 00:12:33.821 "data_size": 63488 00:12:33.821 }, 00:12:33.821 { 00:12:33.821 "name": null, 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.821 "is_configured": false, 00:12:33.821 "data_offset": 0, 00:12:33.821 "data_size": 63488 00:12:33.821 }, 00:12:33.821 { 00:12:33.821 "name": null, 00:12:33.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.821 
"is_configured": false, 00:12:33.821 "data_offset": 2048, 00:12:33.821 "data_size": 63488 00:12:33.821 } 00:12:33.821 ] 00:12:33.821 }' 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.821 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 [2024-11-27 14:12:04.599806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:34.388 [2024-11-27 14:12:04.600110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.388 [2024-11-27 14:12:04.600154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:34.388 [2024-11-27 14:12:04.600174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.388 [2024-11-27 14:12:04.600778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.388 [2024-11-27 14:12:04.600837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:34.388 [2024-11-27 14:12:04.600948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:34.388 [2024-11-27 14:12:04.600987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:34.388 pt2 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.388 [2024-11-27 14:12:04.611752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:34.388 [2024-11-27 14:12:04.611980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.388 [2024-11-27 14:12:04.612048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:34.388 [2024-11-27 14:12:04.612242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.388 [2024-11-27 14:12:04.612774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.388 [2024-11-27 14:12:04.612950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:34.388 [2024-11-27 14:12:04.613152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:34.388 [2024-11-27 14:12:04.613305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:34.388 [2024-11-27 14:12:04.613514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:34.388 [2024-11-27 14:12:04.613631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:34.388 [2024-11-27 14:12:04.614035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:34.388 [2024-11-27 14:12:04.614367] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:34.388 [2024-11-27 14:12:04.614493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:34.388 [2024-11-27 14:12:04.614836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.388 pt3 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.388 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.389 "name": "raid_bdev1", 00:12:34.389 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:34.389 "strip_size_kb": 64, 00:12:34.389 "state": "online", 00:12:34.389 "raid_level": "raid0", 00:12:34.389 "superblock": true, 00:12:34.389 "num_base_bdevs": 3, 00:12:34.389 "num_base_bdevs_discovered": 3, 00:12:34.389 "num_base_bdevs_operational": 3, 00:12:34.389 "base_bdevs_list": [ 00:12:34.389 { 00:12:34.389 "name": "pt1", 00:12:34.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.389 "is_configured": true, 00:12:34.389 "data_offset": 2048, 00:12:34.389 "data_size": 63488 00:12:34.389 }, 00:12:34.389 { 00:12:34.389 "name": "pt2", 00:12:34.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.389 "is_configured": true, 00:12:34.389 "data_offset": 2048, 00:12:34.389 "data_size": 63488 00:12:34.389 }, 00:12:34.389 { 00:12:34.389 "name": "pt3", 00:12:34.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.389 "is_configured": true, 00:12:34.389 "data_offset": 2048, 00:12:34.389 "data_size": 63488 00:12:34.389 } 00:12:34.389 ] 00:12:34.389 }' 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.389 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:34.647 14:12:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.647 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.647 [2024-11-27 14:12:05.144397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.906 "name": "raid_bdev1", 00:12:34.906 "aliases": [ 00:12:34.906 "674bae11-1c48-4e9b-b970-1855c7dcef04" 00:12:34.906 ], 00:12:34.906 "product_name": "Raid Volume", 00:12:34.906 "block_size": 512, 00:12:34.906 "num_blocks": 190464, 00:12:34.906 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:34.906 "assigned_rate_limits": { 00:12:34.906 "rw_ios_per_sec": 0, 00:12:34.906 "rw_mbytes_per_sec": 0, 00:12:34.906 "r_mbytes_per_sec": 0, 00:12:34.906 "w_mbytes_per_sec": 0 00:12:34.906 }, 00:12:34.906 "claimed": false, 00:12:34.906 "zoned": false, 00:12:34.906 "supported_io_types": { 00:12:34.906 "read": true, 00:12:34.906 "write": true, 00:12:34.906 "unmap": true, 00:12:34.906 "flush": true, 00:12:34.906 "reset": true, 00:12:34.906 "nvme_admin": false, 00:12:34.906 "nvme_io": false, 00:12:34.906 "nvme_io_md": false, 00:12:34.906 
"write_zeroes": true, 00:12:34.906 "zcopy": false, 00:12:34.906 "get_zone_info": false, 00:12:34.906 "zone_management": false, 00:12:34.906 "zone_append": false, 00:12:34.906 "compare": false, 00:12:34.906 "compare_and_write": false, 00:12:34.906 "abort": false, 00:12:34.906 "seek_hole": false, 00:12:34.906 "seek_data": false, 00:12:34.906 "copy": false, 00:12:34.906 "nvme_iov_md": false 00:12:34.906 }, 00:12:34.906 "memory_domains": [ 00:12:34.906 { 00:12:34.906 "dma_device_id": "system", 00:12:34.906 "dma_device_type": 1 00:12:34.906 }, 00:12:34.906 { 00:12:34.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.906 "dma_device_type": 2 00:12:34.906 }, 00:12:34.906 { 00:12:34.906 "dma_device_id": "system", 00:12:34.906 "dma_device_type": 1 00:12:34.906 }, 00:12:34.906 { 00:12:34.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.906 "dma_device_type": 2 00:12:34.906 }, 00:12:34.906 { 00:12:34.906 "dma_device_id": "system", 00:12:34.906 "dma_device_type": 1 00:12:34.906 }, 00:12:34.906 { 00:12:34.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.906 "dma_device_type": 2 00:12:34.906 } 00:12:34.906 ], 00:12:34.906 "driver_specific": { 00:12:34.906 "raid": { 00:12:34.906 "uuid": "674bae11-1c48-4e9b-b970-1855c7dcef04", 00:12:34.906 "strip_size_kb": 64, 00:12:34.906 "state": "online", 00:12:34.906 "raid_level": "raid0", 00:12:34.906 "superblock": true, 00:12:34.906 "num_base_bdevs": 3, 00:12:34.906 "num_base_bdevs_discovered": 3, 00:12:34.906 "num_base_bdevs_operational": 3, 00:12:34.906 "base_bdevs_list": [ 00:12:34.906 { 00:12:34.906 "name": "pt1", 00:12:34.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.906 "is_configured": true, 00:12:34.906 "data_offset": 2048, 00:12:34.906 "data_size": 63488 00:12:34.906 }, 00:12:34.906 { 00:12:34.906 "name": "pt2", 00:12:34.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.906 "is_configured": true, 00:12:34.906 "data_offset": 2048, 00:12:34.906 "data_size": 63488 00:12:34.906 }, 00:12:34.906 
{ 00:12:34.906 "name": "pt3", 00:12:34.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.906 "is_configured": true, 00:12:34.906 "data_offset": 2048, 00:12:34.906 "data_size": 63488 00:12:34.906 } 00:12:34.906 ] 00:12:34.906 } 00:12:34.906 } 00:12:34.906 }' 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:34.906 pt2 00:12:34.906 pt3' 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.906 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:34.907 14:12:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.907 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:35.168 
[2024-11-27 14:12:05.436410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 674bae11-1c48-4e9b-b970-1855c7dcef04 '!=' 674bae11-1c48-4e9b-b970-1855c7dcef04 ']' 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65245 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65245 ']' 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65245 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65245 00:12:35.168 killing process with pid 65245 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65245' 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65245 00:12:35.168 [2024-11-27 14:12:05.514735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.168 14:12:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65245 00:12:35.168 [2024-11-27 14:12:05.514865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.168 [2024-11-27 14:12:05.514961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.168 [2024-11-27 14:12:05.514983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:35.454 [2024-11-27 14:12:05.782981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.389 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:36.389 00:12:36.389 real 0m5.658s 00:12:36.389 user 0m8.512s 00:12:36.389 sys 0m0.794s 00:12:36.389 ************************************ 00:12:36.389 END TEST raid_superblock_test 00:12:36.389 ************************************ 00:12:36.389 14:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.389 14:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.389 14:12:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:36.389 14:12:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:36.389 14:12:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.389 14:12:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.648 ************************************ 00:12:36.648 START TEST raid_read_error_test 00:12:36.648 ************************************ 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:36.648 14:12:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wtSVoRiwD2 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65498 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65498 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65498 ']' 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.648 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.648 [2024-11-27 14:12:07.026272] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:12:36.648 [2024-11-27 14:12:07.026646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65498 ] 00:12:36.907 [2024-11-27 14:12:07.216148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.907 [2024-11-27 14:12:07.377535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.164 [2024-11-27 14:12:07.607500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.164 [2024-11-27 14:12:07.607736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.730 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.730 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:37.730 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.730 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.730 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.730 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.730 BaseBdev1_malloc 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 true 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 [2024-11-27 14:12:08.180320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:37.731 [2024-11-27 14:12:08.180403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.731 [2024-11-27 14:12:08.180435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:37.731 [2024-11-27 14:12:08.180453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.731 [2024-11-27 14:12:08.183413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.731 [2024-11-27 14:12:08.183466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.731 BaseBdev1 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 BaseBdev2_malloc 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 true 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.731 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 [2024-11-27 14:12:08.239563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:37.731 [2024-11-27 14:12:08.239633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.731 [2024-11-27 14:12:08.239659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:37.731 [2024-11-27 14:12:08.239677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.989 [2024-11-27 14:12:08.242596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.989 [2024-11-27 14:12:08.242793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.989 BaseBdev2 00:12:37.989 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.989 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.989 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.990 BaseBdev3_malloc 00:12:37.990 14:12:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.990 true 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.990 [2024-11-27 14:12:08.306695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:37.990 [2024-11-27 14:12:08.306930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.990 [2024-11-27 14:12:08.306968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:37.990 [2024-11-27 14:12:08.306988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.990 [2024-11-27 14:12:08.309908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.990 [2024-11-27 14:12:08.310087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:37.990 BaseBdev3 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.990 [2024-11-27 14:12:08.314993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.990 [2024-11-27 14:12:08.317569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.990 [2024-11-27 14:12:08.317873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.990 [2024-11-27 14:12:08.318215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:37.990 [2024-11-27 14:12:08.318238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:37.990 [2024-11-27 14:12:08.318557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:37.990 [2024-11-27 14:12:08.318804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:37.990 [2024-11-27 14:12:08.318847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:37.990 [2024-11-27 14:12:08.319082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.990 14:12:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.990 "name": "raid_bdev1", 00:12:37.990 "uuid": "d7dd67cf-1215-4228-91ba-c3eb5c1f0c60", 00:12:37.990 "strip_size_kb": 64, 00:12:37.990 "state": "online", 00:12:37.990 "raid_level": "raid0", 00:12:37.990 "superblock": true, 00:12:37.990 "num_base_bdevs": 3, 00:12:37.990 "num_base_bdevs_discovered": 3, 00:12:37.990 "num_base_bdevs_operational": 3, 00:12:37.990 "base_bdevs_list": [ 00:12:37.990 { 00:12:37.990 "name": "BaseBdev1", 00:12:37.990 "uuid": "c5c415bd-1bc8-5256-baef-1f58bd3fc73e", 00:12:37.990 "is_configured": true, 00:12:37.990 "data_offset": 2048, 00:12:37.990 "data_size": 63488 00:12:37.990 }, 00:12:37.990 { 00:12:37.990 "name": "BaseBdev2", 00:12:37.990 "uuid": "7b09d6a0-9087-57de-a80b-1af1b61e285f", 00:12:37.990 "is_configured": true, 00:12:37.990 "data_offset": 2048, 00:12:37.990 "data_size": 63488 
00:12:37.990 }, 00:12:37.990 { 00:12:37.990 "name": "BaseBdev3", 00:12:37.990 "uuid": "b3d720d2-0979-5447-bb8e-abefa0112ae7", 00:12:37.990 "is_configured": true, 00:12:37.990 "data_offset": 2048, 00:12:37.990 "data_size": 63488 00:12:37.990 } 00:12:37.990 ] 00:12:37.990 }' 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.990 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.559 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:38.559 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:38.559 [2024-11-27 14:12:08.952660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.497 "name": "raid_bdev1", 00:12:39.497 "uuid": "d7dd67cf-1215-4228-91ba-c3eb5c1f0c60", 00:12:39.497 "strip_size_kb": 64, 00:12:39.497 "state": "online", 00:12:39.497 "raid_level": "raid0", 00:12:39.497 "superblock": true, 00:12:39.497 "num_base_bdevs": 3, 00:12:39.497 "num_base_bdevs_discovered": 3, 00:12:39.497 "num_base_bdevs_operational": 3, 00:12:39.497 "base_bdevs_list": [ 00:12:39.497 { 00:12:39.497 "name": "BaseBdev1", 00:12:39.497 "uuid": "c5c415bd-1bc8-5256-baef-1f58bd3fc73e", 00:12:39.497 "is_configured": true, 00:12:39.497 "data_offset": 2048, 00:12:39.497 "data_size": 63488 
00:12:39.497 }, 00:12:39.497 { 00:12:39.497 "name": "BaseBdev2", 00:12:39.497 "uuid": "7b09d6a0-9087-57de-a80b-1af1b61e285f", 00:12:39.497 "is_configured": true, 00:12:39.497 "data_offset": 2048, 00:12:39.497 "data_size": 63488 00:12:39.497 }, 00:12:39.497 { 00:12:39.497 "name": "BaseBdev3", 00:12:39.497 "uuid": "b3d720d2-0979-5447-bb8e-abefa0112ae7", 00:12:39.497 "is_configured": true, 00:12:39.497 "data_offset": 2048, 00:12:39.497 "data_size": 63488 00:12:39.497 } 00:12:39.497 ] 00:12:39.497 }' 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.497 14:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.069 [2024-11-27 14:12:10.355585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.069 [2024-11-27 14:12:10.355751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.069 [2024-11-27 14:12:10.359376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.069 [2024-11-27 14:12:10.359546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.069 [2024-11-27 14:12:10.359747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.069 [2024-11-27 14:12:10.359909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:40.069 { 00:12:40.069 "results": [ 00:12:40.069 { 00:12:40.069 "job": "raid_bdev1", 00:12:40.069 "core_mask": "0x1", 00:12:40.069 "workload": "randrw", 00:12:40.069 "percentage": 50, 
00:12:40.069 "status": "finished", 00:12:40.069 "queue_depth": 1, 00:12:40.069 "io_size": 131072, 00:12:40.069 "runtime": 1.400669, 00:12:40.069 "iops": 10678.468646054136, 00:12:40.069 "mibps": 1334.808580756767, 00:12:40.069 "io_failed": 1, 00:12:40.069 "io_timeout": 0, 00:12:40.069 "avg_latency_us": 130.29112594051222, 00:12:40.069 "min_latency_us": 28.625454545454545, 00:12:40.069 "max_latency_us": 1794.7927272727272 00:12:40.069 } 00:12:40.069 ], 00:12:40.069 "core_count": 1 00:12:40.069 } 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65498 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65498 ']' 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65498 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65498 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65498' 00:12:40.069 killing process with pid 65498 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65498 00:12:40.069 [2024-11-27 14:12:10.397930] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.069 14:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65498 00:12:40.327 [2024-11-27 
14:12:10.603093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wtSVoRiwD2 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:41.261 00:12:41.261 real 0m4.808s 00:12:41.261 user 0m6.060s 00:12:41.261 sys 0m0.563s 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.261 ************************************ 00:12:41.261 END TEST raid_read_error_test 00:12:41.261 ************************************ 00:12:41.261 14:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.261 14:12:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:41.261 14:12:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:41.261 14:12:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.261 14:12:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.261 ************************************ 00:12:41.261 START TEST raid_write_error_test 00:12:41.261 ************************************ 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:12:41.261 14:12:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.261 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.262 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:41.262 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.262 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.262 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:41.262 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:41.262 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:41.520 14:12:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zmCouEwgs6 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65644 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65644 00:12:41.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65644 ']' 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.520 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.520 [2024-11-27 14:12:11.868990] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:12:41.520 [2024-11-27 14:12:11.869282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65644 ] 00:12:41.779 [2024-11-27 14:12:12.045136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.779 [2024-11-27 14:12:12.176908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.104 [2024-11-27 14:12:12.381547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.104 [2024-11-27 14:12:12.381860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 BaseBdev1_malloc 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 true 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 [2024-11-27 14:12:12.939183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:42.672 [2024-11-27 14:12:12.939416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.672 [2024-11-27 14:12:12.939509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:42.672 [2024-11-27 14:12:12.939682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.672 [2024-11-27 14:12:12.942662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.672 [2024-11-27 14:12:12.942840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.672 BaseBdev1 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.672 BaseBdev2_malloc 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 true 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 [2024-11-27 14:12:12.999831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:42.672 [2024-11-27 14:12:13.000039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.672 [2024-11-27 14:12:13.000076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:42.672 [2024-11-27 14:12:13.000096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.672 [2024-11-27 14:12:13.002923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.672 [2024-11-27 14:12:13.002973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.672 BaseBdev2 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:42.672 14:12:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 BaseBdev3_malloc 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 true 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.672 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.672 [2024-11-27 14:12:13.081614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:42.672 [2024-11-27 14:12:13.081811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.673 [2024-11-27 14:12:13.081897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:42.673 [2024-11-27 14:12:13.082038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.673 [2024-11-27 14:12:13.085044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.673 [2024-11-27 14:12:13.085098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:42.673 BaseBdev3 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.673 [2024-11-27 14:12:13.089763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.673 [2024-11-27 14:12:13.092258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.673 [2024-11-27 14:12:13.092516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.673 [2024-11-27 14:12:13.092797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:42.673 [2024-11-27 14:12:13.092843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:42.673 [2024-11-27 14:12:13.093166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:42.673 [2024-11-27 14:12:13.093434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:42.673 [2024-11-27 14:12:13.093457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:42.673 [2024-11-27 14:12:13.093700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.673 "name": "raid_bdev1", 00:12:42.673 "uuid": "dcbc89b4-b63c-4281-afba-b0958c6dcd39", 00:12:42.673 "strip_size_kb": 64, 00:12:42.673 "state": "online", 00:12:42.673 "raid_level": "raid0", 00:12:42.673 "superblock": true, 00:12:42.673 "num_base_bdevs": 3, 00:12:42.673 "num_base_bdevs_discovered": 3, 00:12:42.673 "num_base_bdevs_operational": 3, 00:12:42.673 "base_bdevs_list": [ 00:12:42.673 { 00:12:42.673 "name": "BaseBdev1", 
00:12:42.673 "uuid": "7f76cbe9-f14a-53a1-98c9-6acd9a341354", 00:12:42.673 "is_configured": true, 00:12:42.673 "data_offset": 2048, 00:12:42.673 "data_size": 63488 00:12:42.673 }, 00:12:42.673 { 00:12:42.673 "name": "BaseBdev2", 00:12:42.673 "uuid": "84c9277f-0109-5682-8d5f-0cb1351694c5", 00:12:42.673 "is_configured": true, 00:12:42.673 "data_offset": 2048, 00:12:42.673 "data_size": 63488 00:12:42.673 }, 00:12:42.673 { 00:12:42.673 "name": "BaseBdev3", 00:12:42.673 "uuid": "892e2422-fbde-59b6-8935-c6465538d0e8", 00:12:42.673 "is_configured": true, 00:12:42.673 "data_offset": 2048, 00:12:42.673 "data_size": 63488 00:12:42.673 } 00:12:42.673 ] 00:12:42.673 }' 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.673 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:43.239 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:43.239 [2024-11-27 14:12:13.715341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.172 "name": "raid_bdev1", 00:12:44.172 "uuid": "dcbc89b4-b63c-4281-afba-b0958c6dcd39", 00:12:44.172 "strip_size_kb": 64, 00:12:44.172 "state": "online", 00:12:44.172 
"raid_level": "raid0", 00:12:44.172 "superblock": true, 00:12:44.172 "num_base_bdevs": 3, 00:12:44.172 "num_base_bdevs_discovered": 3, 00:12:44.172 "num_base_bdevs_operational": 3, 00:12:44.172 "base_bdevs_list": [ 00:12:44.172 { 00:12:44.172 "name": "BaseBdev1", 00:12:44.172 "uuid": "7f76cbe9-f14a-53a1-98c9-6acd9a341354", 00:12:44.172 "is_configured": true, 00:12:44.172 "data_offset": 2048, 00:12:44.172 "data_size": 63488 00:12:44.172 }, 00:12:44.172 { 00:12:44.172 "name": "BaseBdev2", 00:12:44.172 "uuid": "84c9277f-0109-5682-8d5f-0cb1351694c5", 00:12:44.172 "is_configured": true, 00:12:44.172 "data_offset": 2048, 00:12:44.172 "data_size": 63488 00:12:44.172 }, 00:12:44.172 { 00:12:44.172 "name": "BaseBdev3", 00:12:44.172 "uuid": "892e2422-fbde-59b6-8935-c6465538d0e8", 00:12:44.172 "is_configured": true, 00:12:44.172 "data_offset": 2048, 00:12:44.172 "data_size": 63488 00:12:44.172 } 00:12:44.172 ] 00:12:44.172 }' 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.172 14:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.789 [2024-11-27 14:12:15.118583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.789 [2024-11-27 14:12:15.118617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.789 [2024-11-27 14:12:15.122089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.789 [2024-11-27 14:12:15.122147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.789 [2024-11-27 14:12:15.122201] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.789 [2024-11-27 14:12:15.122216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:44.789 { 00:12:44.789 "results": [ 00:12:44.789 { 00:12:44.789 "job": "raid_bdev1", 00:12:44.789 "core_mask": "0x1", 00:12:44.789 "workload": "randrw", 00:12:44.789 "percentage": 50, 00:12:44.789 "status": "finished", 00:12:44.789 "queue_depth": 1, 00:12:44.789 "io_size": 131072, 00:12:44.789 "runtime": 1.400674, 00:12:44.789 "iops": 10409.274392185476, 00:12:44.789 "mibps": 1301.1592990231845, 00:12:44.789 "io_failed": 1, 00:12:44.789 "io_timeout": 0, 00:12:44.789 "avg_latency_us": 133.8150295216066, 00:12:44.789 "min_latency_us": 40.72727272727273, 00:12:44.789 "max_latency_us": 1899.0545454545454 00:12:44.789 } 00:12:44.789 ], 00:12:44.789 "core_count": 1 00:12:44.789 } 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65644 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65644 ']' 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65644 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65644 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65644' 00:12:44.789 killing process with pid 65644 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65644 00:12:44.789 [2024-11-27 14:12:15.163876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.789 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65644 00:12:45.046 [2024-11-27 14:12:15.368604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zmCouEwgs6 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:45.980 ************************************ 00:12:45.980 END TEST raid_write_error_test 00:12:45.980 ************************************ 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:45.980 00:12:45.980 real 0m4.714s 00:12:45.980 user 0m5.836s 00:12:45.980 sys 0m0.581s 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.980 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.239 14:12:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:46.239 14:12:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:46.239 14:12:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:46.239 14:12:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.239 14:12:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.239 ************************************ 00:12:46.239 START TEST raid_state_function_test 00:12:46.239 ************************************ 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:46.239 14:12:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:46.239 Process raid pid: 65802 00:12:46.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65802 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65802' 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65802 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65802 ']' 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.239 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.239 [2024-11-27 14:12:16.632424] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:12:46.239 [2024-11-27 14:12:16.632773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.498 [2024-11-27 14:12:16.807512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.498 [2024-11-27 14:12:16.937411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.756 [2024-11-27 14:12:17.146108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.756 [2024-11-27 14:12:17.146407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.322 [2024-11-27 14:12:17.655217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.322 [2024-11-27 14:12:17.655449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.322 [2024-11-27 14:12:17.655615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.322 [2024-11-27 14:12:17.655681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.322 [2024-11-27 14:12:17.655858] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:47.322 [2024-11-27 14:12:17.655919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.322 14:12:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.322 "name": "Existed_Raid", 00:12:47.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.322 "strip_size_kb": 64, 00:12:47.322 "state": "configuring", 00:12:47.322 "raid_level": "concat", 00:12:47.322 "superblock": false, 00:12:47.322 "num_base_bdevs": 3, 00:12:47.322 "num_base_bdevs_discovered": 0, 00:12:47.322 "num_base_bdevs_operational": 3, 00:12:47.322 "base_bdevs_list": [ 00:12:47.322 { 00:12:47.322 "name": "BaseBdev1", 00:12:47.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.322 "is_configured": false, 00:12:47.322 "data_offset": 0, 00:12:47.322 "data_size": 0 00:12:47.322 }, 00:12:47.322 { 00:12:47.322 "name": "BaseBdev2", 00:12:47.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.322 "is_configured": false, 00:12:47.322 "data_offset": 0, 00:12:47.322 "data_size": 0 00:12:47.322 }, 00:12:47.322 { 00:12:47.322 "name": "BaseBdev3", 00:12:47.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.322 "is_configured": false, 00:12:47.322 "data_offset": 0, 00:12:47.322 "data_size": 0 00:12:47.322 } 00:12:47.322 ] 00:12:47.322 }' 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 [2024-11-27 14:12:18.139246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.890 [2024-11-27 14:12:18.139289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 [2024-11-27 14:12:18.147240] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.890 [2024-11-27 14:12:18.147425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.890 [2024-11-27 14:12:18.147547] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.890 [2024-11-27 14:12:18.147687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.890 [2024-11-27 14:12:18.147808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.890 [2024-11-27 14:12:18.147883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 [2024-11-27 14:12:18.192428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.890 BaseBdev1 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 [ 00:12:47.890 { 00:12:47.890 "name": "BaseBdev1", 00:12:47.890 "aliases": [ 00:12:47.890 "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f" 00:12:47.890 ], 00:12:47.890 "product_name": "Malloc disk", 00:12:47.890 "block_size": 512, 00:12:47.890 "num_blocks": 65536, 00:12:47.890 "uuid": "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f", 00:12:47.890 "assigned_rate_limits": { 00:12:47.890 "rw_ios_per_sec": 0, 00:12:47.890 "rw_mbytes_per_sec": 0, 00:12:47.890 "r_mbytes_per_sec": 0, 00:12:47.890 "w_mbytes_per_sec": 0 00:12:47.890 }, 
00:12:47.890 "claimed": true, 00:12:47.890 "claim_type": "exclusive_write", 00:12:47.890 "zoned": false, 00:12:47.890 "supported_io_types": { 00:12:47.890 "read": true, 00:12:47.890 "write": true, 00:12:47.890 "unmap": true, 00:12:47.890 "flush": true, 00:12:47.890 "reset": true, 00:12:47.890 "nvme_admin": false, 00:12:47.890 "nvme_io": false, 00:12:47.890 "nvme_io_md": false, 00:12:47.890 "write_zeroes": true, 00:12:47.890 "zcopy": true, 00:12:47.890 "get_zone_info": false, 00:12:47.890 "zone_management": false, 00:12:47.890 "zone_append": false, 00:12:47.890 "compare": false, 00:12:47.890 "compare_and_write": false, 00:12:47.890 "abort": true, 00:12:47.890 "seek_hole": false, 00:12:47.890 "seek_data": false, 00:12:47.890 "copy": true, 00:12:47.890 "nvme_iov_md": false 00:12:47.890 }, 00:12:47.890 "memory_domains": [ 00:12:47.890 { 00:12:47.890 "dma_device_id": "system", 00:12:47.890 "dma_device_type": 1 00:12:47.890 }, 00:12:47.890 { 00:12:47.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.890 "dma_device_type": 2 00:12:47.890 } 00:12:47.890 ], 00:12:47.890 "driver_specific": {} 00:12:47.890 } 00:12:47.890 ] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.890 14:12:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.890 "name": "Existed_Raid", 00:12:47.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.890 "strip_size_kb": 64, 00:12:47.890 "state": "configuring", 00:12:47.890 "raid_level": "concat", 00:12:47.890 "superblock": false, 00:12:47.890 "num_base_bdevs": 3, 00:12:47.890 "num_base_bdevs_discovered": 1, 00:12:47.890 "num_base_bdevs_operational": 3, 00:12:47.890 "base_bdevs_list": [ 00:12:47.890 { 00:12:47.891 "name": "BaseBdev1", 00:12:47.891 "uuid": "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f", 00:12:47.891 "is_configured": true, 00:12:47.891 "data_offset": 0, 00:12:47.891 "data_size": 65536 00:12:47.891 }, 00:12:47.891 { 00:12:47.891 "name": "BaseBdev2", 00:12:47.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.891 "is_configured": false, 00:12:47.891 
"data_offset": 0, 00:12:47.891 "data_size": 0 00:12:47.891 }, 00:12:47.891 { 00:12:47.891 "name": "BaseBdev3", 00:12:47.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.891 "is_configured": false, 00:12:47.891 "data_offset": 0, 00:12:47.891 "data_size": 0 00:12:47.891 } 00:12:47.891 ] 00:12:47.891 }' 00:12:47.891 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.891 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 [2024-11-27 14:12:18.748620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.457 [2024-11-27 14:12:18.748814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 [2024-11-27 14:12:18.756671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.457 [2024-11-27 14:12:18.759219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.457 [2024-11-27 14:12:18.759400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:48.457 [2024-11-27 14:12:18.759522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.457 [2024-11-27 14:12:18.759582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.457 "name": "Existed_Raid", 00:12:48.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.457 "strip_size_kb": 64, 00:12:48.457 "state": "configuring", 00:12:48.457 "raid_level": "concat", 00:12:48.457 "superblock": false, 00:12:48.457 "num_base_bdevs": 3, 00:12:48.457 "num_base_bdevs_discovered": 1, 00:12:48.457 "num_base_bdevs_operational": 3, 00:12:48.457 "base_bdevs_list": [ 00:12:48.457 { 00:12:48.457 "name": "BaseBdev1", 00:12:48.457 "uuid": "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f", 00:12:48.457 "is_configured": true, 00:12:48.457 "data_offset": 0, 00:12:48.457 "data_size": 65536 00:12:48.457 }, 00:12:48.457 { 00:12:48.457 "name": "BaseBdev2", 00:12:48.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.457 "is_configured": false, 00:12:48.457 "data_offset": 0, 00:12:48.457 "data_size": 0 00:12:48.457 }, 00:12:48.457 { 00:12:48.457 "name": "BaseBdev3", 00:12:48.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.457 "is_configured": false, 00:12:48.457 "data_offset": 0, 00:12:48.457 "data_size": 0 00:12:48.457 } 00:12:48.457 ] 00:12:48.457 }' 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.457 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.022 [2024-11-27 14:12:19.315109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.022 BaseBdev2 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.022 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.022 [ 00:12:49.022 { 00:12:49.023 "name": "BaseBdev2", 00:12:49.023 "aliases": [ 00:12:49.023 "0c334cb5-e4c2-4129-9607-dc4b668eeb12" 00:12:49.023 ], 
00:12:49.023 "product_name": "Malloc disk", 00:12:49.023 "block_size": 512, 00:12:49.023 "num_blocks": 65536, 00:12:49.023 "uuid": "0c334cb5-e4c2-4129-9607-dc4b668eeb12", 00:12:49.023 "assigned_rate_limits": { 00:12:49.023 "rw_ios_per_sec": 0, 00:12:49.023 "rw_mbytes_per_sec": 0, 00:12:49.023 "r_mbytes_per_sec": 0, 00:12:49.023 "w_mbytes_per_sec": 0 00:12:49.023 }, 00:12:49.023 "claimed": true, 00:12:49.023 "claim_type": "exclusive_write", 00:12:49.023 "zoned": false, 00:12:49.023 "supported_io_types": { 00:12:49.023 "read": true, 00:12:49.023 "write": true, 00:12:49.023 "unmap": true, 00:12:49.023 "flush": true, 00:12:49.023 "reset": true, 00:12:49.023 "nvme_admin": false, 00:12:49.023 "nvme_io": false, 00:12:49.023 "nvme_io_md": false, 00:12:49.023 "write_zeroes": true, 00:12:49.023 "zcopy": true, 00:12:49.023 "get_zone_info": false, 00:12:49.023 "zone_management": false, 00:12:49.023 "zone_append": false, 00:12:49.023 "compare": false, 00:12:49.023 "compare_and_write": false, 00:12:49.023 "abort": true, 00:12:49.023 "seek_hole": false, 00:12:49.023 "seek_data": false, 00:12:49.023 "copy": true, 00:12:49.023 "nvme_iov_md": false 00:12:49.023 }, 00:12:49.023 "memory_domains": [ 00:12:49.023 { 00:12:49.023 "dma_device_id": "system", 00:12:49.023 "dma_device_type": 1 00:12:49.023 }, 00:12:49.023 { 00:12:49.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.023 "dma_device_type": 2 00:12:49.023 } 00:12:49.023 ], 00:12:49.023 "driver_specific": {} 00:12:49.023 } 00:12:49.023 ] 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.023 "name": "Existed_Raid", 00:12:49.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.023 "strip_size_kb": 64, 00:12:49.023 "state": "configuring", 00:12:49.023 "raid_level": "concat", 00:12:49.023 
"superblock": false, 00:12:49.023 "num_base_bdevs": 3, 00:12:49.023 "num_base_bdevs_discovered": 2, 00:12:49.023 "num_base_bdevs_operational": 3, 00:12:49.023 "base_bdevs_list": [ 00:12:49.023 { 00:12:49.023 "name": "BaseBdev1", 00:12:49.023 "uuid": "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f", 00:12:49.023 "is_configured": true, 00:12:49.023 "data_offset": 0, 00:12:49.023 "data_size": 65536 00:12:49.023 }, 00:12:49.023 { 00:12:49.023 "name": "BaseBdev2", 00:12:49.023 "uuid": "0c334cb5-e4c2-4129-9607-dc4b668eeb12", 00:12:49.023 "is_configured": true, 00:12:49.023 "data_offset": 0, 00:12:49.023 "data_size": 65536 00:12:49.023 }, 00:12:49.023 { 00:12:49.023 "name": "BaseBdev3", 00:12:49.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.023 "is_configured": false, 00:12:49.023 "data_offset": 0, 00:12:49.023 "data_size": 0 00:12:49.023 } 00:12:49.023 ] 00:12:49.023 }' 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.023 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.590 [2024-11-27 14:12:19.933498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.590 [2024-11-27 14:12:19.933555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:49.590 [2024-11-27 14:12:19.933575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:49.590 [2024-11-27 14:12:19.933937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:49.590 [2024-11-27 14:12:19.934184] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:49.590 [2024-11-27 14:12:19.934203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:49.590 [2024-11-27 14:12:19.934526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.590 BaseBdev3 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.590 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.591 [ 00:12:49.591 { 00:12:49.591 
"name": "BaseBdev3", 00:12:49.591 "aliases": [ 00:12:49.591 "1ad6bdc5-5cb8-4410-9eaa-8901af170b03" 00:12:49.591 ], 00:12:49.591 "product_name": "Malloc disk", 00:12:49.591 "block_size": 512, 00:12:49.591 "num_blocks": 65536, 00:12:49.591 "uuid": "1ad6bdc5-5cb8-4410-9eaa-8901af170b03", 00:12:49.591 "assigned_rate_limits": { 00:12:49.591 "rw_ios_per_sec": 0, 00:12:49.591 "rw_mbytes_per_sec": 0, 00:12:49.591 "r_mbytes_per_sec": 0, 00:12:49.591 "w_mbytes_per_sec": 0 00:12:49.591 }, 00:12:49.591 "claimed": true, 00:12:49.591 "claim_type": "exclusive_write", 00:12:49.591 "zoned": false, 00:12:49.591 "supported_io_types": { 00:12:49.591 "read": true, 00:12:49.591 "write": true, 00:12:49.591 "unmap": true, 00:12:49.591 "flush": true, 00:12:49.591 "reset": true, 00:12:49.591 "nvme_admin": false, 00:12:49.591 "nvme_io": false, 00:12:49.591 "nvme_io_md": false, 00:12:49.591 "write_zeroes": true, 00:12:49.591 "zcopy": true, 00:12:49.591 "get_zone_info": false, 00:12:49.591 "zone_management": false, 00:12:49.591 "zone_append": false, 00:12:49.591 "compare": false, 00:12:49.591 "compare_and_write": false, 00:12:49.591 "abort": true, 00:12:49.591 "seek_hole": false, 00:12:49.591 "seek_data": false, 00:12:49.591 "copy": true, 00:12:49.591 "nvme_iov_md": false 00:12:49.591 }, 00:12:49.591 "memory_domains": [ 00:12:49.591 { 00:12:49.591 "dma_device_id": "system", 00:12:49.591 "dma_device_type": 1 00:12:49.591 }, 00:12:49.591 { 00:12:49.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.591 "dma_device_type": 2 00:12:49.591 } 00:12:49.591 ], 00:12:49.591 "driver_specific": {} 00:12:49.591 } 00:12:49.591 ] 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.591 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.591 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.591 "name": "Existed_Raid", 00:12:49.591 "uuid": "05560c68-dad5-4e5e-8079-a97e8c02c27b", 00:12:49.591 
"strip_size_kb": 64, 00:12:49.591 "state": "online", 00:12:49.591 "raid_level": "concat", 00:12:49.591 "superblock": false, 00:12:49.591 "num_base_bdevs": 3, 00:12:49.591 "num_base_bdevs_discovered": 3, 00:12:49.591 "num_base_bdevs_operational": 3, 00:12:49.591 "base_bdevs_list": [ 00:12:49.591 { 00:12:49.591 "name": "BaseBdev1", 00:12:49.591 "uuid": "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f", 00:12:49.591 "is_configured": true, 00:12:49.591 "data_offset": 0, 00:12:49.591 "data_size": 65536 00:12:49.591 }, 00:12:49.591 { 00:12:49.591 "name": "BaseBdev2", 00:12:49.591 "uuid": "0c334cb5-e4c2-4129-9607-dc4b668eeb12", 00:12:49.591 "is_configured": true, 00:12:49.591 "data_offset": 0, 00:12:49.591 "data_size": 65536 00:12:49.591 }, 00:12:49.591 { 00:12:49.591 "name": "BaseBdev3", 00:12:49.591 "uuid": "1ad6bdc5-5cb8-4410-9eaa-8901af170b03", 00:12:49.591 "is_configured": true, 00:12:49.591 "data_offset": 0, 00:12:49.591 "data_size": 65536 00:12:49.591 } 00:12:49.591 ] 00:12:49.591 }' 00:12:49.591 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.591 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.158 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.159 [2024-11-27 14:12:20.494087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.159 "name": "Existed_Raid", 00:12:50.159 "aliases": [ 00:12:50.159 "05560c68-dad5-4e5e-8079-a97e8c02c27b" 00:12:50.159 ], 00:12:50.159 "product_name": "Raid Volume", 00:12:50.159 "block_size": 512, 00:12:50.159 "num_blocks": 196608, 00:12:50.159 "uuid": "05560c68-dad5-4e5e-8079-a97e8c02c27b", 00:12:50.159 "assigned_rate_limits": { 00:12:50.159 "rw_ios_per_sec": 0, 00:12:50.159 "rw_mbytes_per_sec": 0, 00:12:50.159 "r_mbytes_per_sec": 0, 00:12:50.159 "w_mbytes_per_sec": 0 00:12:50.159 }, 00:12:50.159 "claimed": false, 00:12:50.159 "zoned": false, 00:12:50.159 "supported_io_types": { 00:12:50.159 "read": true, 00:12:50.159 "write": true, 00:12:50.159 "unmap": true, 00:12:50.159 "flush": true, 00:12:50.159 "reset": true, 00:12:50.159 "nvme_admin": false, 00:12:50.159 "nvme_io": false, 00:12:50.159 "nvme_io_md": false, 00:12:50.159 "write_zeroes": true, 00:12:50.159 "zcopy": false, 00:12:50.159 "get_zone_info": false, 00:12:50.159 "zone_management": false, 00:12:50.159 "zone_append": false, 00:12:50.159 "compare": false, 00:12:50.159 "compare_and_write": false, 00:12:50.159 "abort": false, 00:12:50.159 "seek_hole": false, 00:12:50.159 "seek_data": false, 00:12:50.159 "copy": false, 00:12:50.159 "nvme_iov_md": false 00:12:50.159 }, 00:12:50.159 "memory_domains": [ 00:12:50.159 { 00:12:50.159 "dma_device_id": "system", 
00:12:50.159 "dma_device_type": 1 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.159 "dma_device_type": 2 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "dma_device_id": "system", 00:12:50.159 "dma_device_type": 1 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.159 "dma_device_type": 2 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "dma_device_id": "system", 00:12:50.159 "dma_device_type": 1 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.159 "dma_device_type": 2 00:12:50.159 } 00:12:50.159 ], 00:12:50.159 "driver_specific": { 00:12:50.159 "raid": { 00:12:50.159 "uuid": "05560c68-dad5-4e5e-8079-a97e8c02c27b", 00:12:50.159 "strip_size_kb": 64, 00:12:50.159 "state": "online", 00:12:50.159 "raid_level": "concat", 00:12:50.159 "superblock": false, 00:12:50.159 "num_base_bdevs": 3, 00:12:50.159 "num_base_bdevs_discovered": 3, 00:12:50.159 "num_base_bdevs_operational": 3, 00:12:50.159 "base_bdevs_list": [ 00:12:50.159 { 00:12:50.159 "name": "BaseBdev1", 00:12:50.159 "uuid": "9a6d6f30-5fe1-43e6-80f4-3bb1d52f518f", 00:12:50.159 "is_configured": true, 00:12:50.159 "data_offset": 0, 00:12:50.159 "data_size": 65536 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "name": "BaseBdev2", 00:12:50.159 "uuid": "0c334cb5-e4c2-4129-9607-dc4b668eeb12", 00:12:50.159 "is_configured": true, 00:12:50.159 "data_offset": 0, 00:12:50.159 "data_size": 65536 00:12:50.159 }, 00:12:50.159 { 00:12:50.159 "name": "BaseBdev3", 00:12:50.159 "uuid": "1ad6bdc5-5cb8-4410-9eaa-8901af170b03", 00:12:50.159 "is_configured": true, 00:12:50.159 "data_offset": 0, 00:12:50.159 "data_size": 65536 00:12:50.159 } 00:12:50.159 ] 00:12:50.159 } 00:12:50.159 } 00:12:50.159 }' 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.159 14:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:50.159 BaseBdev2 00:12:50.159 BaseBdev3' 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.159 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.419 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.419 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.419 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.419 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.420 14:12:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.420 [2024-11-27 14:12:20.801787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.420 [2024-11-27 14:12:20.801954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.420 [2024-11-27 14:12:20.802179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.420 14:12:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.420 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.678 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.678 "name": "Existed_Raid", 00:12:50.678 "uuid": "05560c68-dad5-4e5e-8079-a97e8c02c27b", 00:12:50.678 "strip_size_kb": 64, 00:12:50.678 "state": "offline", 00:12:50.678 "raid_level": "concat", 00:12:50.678 "superblock": false, 00:12:50.678 "num_base_bdevs": 3, 00:12:50.678 "num_base_bdevs_discovered": 2, 00:12:50.678 "num_base_bdevs_operational": 2, 00:12:50.678 "base_bdevs_list": [ 00:12:50.678 { 00:12:50.678 "name": null, 00:12:50.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.678 "is_configured": false, 00:12:50.678 "data_offset": 0, 00:12:50.678 "data_size": 65536 00:12:50.678 }, 00:12:50.678 { 00:12:50.678 "name": "BaseBdev2", 00:12:50.678 "uuid": "0c334cb5-e4c2-4129-9607-dc4b668eeb12", 00:12:50.678 "is_configured": true, 00:12:50.678 "data_offset": 0, 00:12:50.678 "data_size": 65536 00:12:50.678 }, 00:12:50.678 { 00:12:50.678 "name": "BaseBdev3", 00:12:50.678 "uuid": "1ad6bdc5-5cb8-4410-9eaa-8901af170b03", 00:12:50.678 "is_configured": true, 00:12:50.678 "data_offset": 0, 00:12:50.678 "data_size": 65536 00:12:50.678 } 00:12:50.678 ] 00:12:50.678 }' 00:12:50.678 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.678 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.937 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.937 [2024-11-27 14:12:21.436770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
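The trace above shows `has_redundancy concat` returning 1 at bdev_raid.sh@198-200, which drives `expected_state=offline`: after deleting BaseBdev1, a concat array cannot stay online because the level has no redundancy. A minimal sketch of that decision, assuming a `case`-based helper like the one the script evaluates (the exact set of redundant levels, e.g. whether it is `raid1 | raid5f`, is an assumption here):

```shell
# Hypothetical reconstruction of the has_redundancy check driving this log:
# redundant levels return 0 (array survives losing a base bdev),
# striped/concatenated levels return 1 (array is expected to go offline).
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;   # assumed membership; hedged, not verified
        *) return 1 ;;
    esac
}

if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"
```

This matches the trace: `has_redundancy concat` hits the `return 1` branch, so the subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 2` expects the JSON's `"state"` to read `"offline"` with two remaining operational base bdevs.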
00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.195 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.196 [2024-11-27 14:12:21.584967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.196 [2024-11-27 14:12:21.585160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.196 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 
-gt 2 ']' 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.455 BaseBdev2 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.455 14:12:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.455 [ 00:12:51.455 { 00:12:51.455 "name": "BaseBdev2", 00:12:51.455 "aliases": [ 00:12:51.455 "650950f0-689c-457e-9087-a8a29abbaa43" 00:12:51.455 ], 00:12:51.455 "product_name": "Malloc disk", 00:12:51.455 "block_size": 512, 00:12:51.455 "num_blocks": 65536, 00:12:51.455 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:51.455 "assigned_rate_limits": { 00:12:51.455 "rw_ios_per_sec": 0, 00:12:51.455 "rw_mbytes_per_sec": 0, 00:12:51.455 "r_mbytes_per_sec": 0, 00:12:51.455 "w_mbytes_per_sec": 0 00:12:51.455 }, 00:12:51.455 "claimed": false, 00:12:51.455 "zoned": false, 00:12:51.455 "supported_io_types": { 00:12:51.455 "read": true, 00:12:51.455 "write": true, 00:12:51.455 "unmap": true, 00:12:51.455 "flush": true, 00:12:51.455 "reset": true, 00:12:51.455 "nvme_admin": false, 00:12:51.455 "nvme_io": false, 00:12:51.455 "nvme_io_md": false, 00:12:51.455 "write_zeroes": true, 00:12:51.455 "zcopy": true, 00:12:51.455 "get_zone_info": false, 00:12:51.455 "zone_management": false, 00:12:51.455 "zone_append": false, 00:12:51.455 "compare": false, 00:12:51.455 "compare_and_write": false, 00:12:51.455 "abort": true, 00:12:51.455 "seek_hole": false, 00:12:51.455 "seek_data": false, 00:12:51.455 "copy": true, 00:12:51.455 "nvme_iov_md": false 00:12:51.455 }, 00:12:51.455 "memory_domains": [ 00:12:51.455 { 00:12:51.455 "dma_device_id": "system", 00:12:51.455 "dma_device_type": 1 00:12:51.455 }, 00:12:51.455 { 00:12:51.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.455 "dma_device_type": 2 00:12:51.455 } 00:12:51.455 ], 00:12:51.455 "driver_specific": {} 00:12:51.455 } 00:12:51.455 ] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.455 BaseBdev3 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.455 
14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.455 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.455 [ 00:12:51.455 { 00:12:51.455 "name": "BaseBdev3", 00:12:51.455 "aliases": [ 00:12:51.455 "99a8cb70-1f00-4df0-a0e4-71de8f985299" 00:12:51.455 ], 00:12:51.455 "product_name": "Malloc disk", 00:12:51.455 "block_size": 512, 00:12:51.455 "num_blocks": 65536, 00:12:51.455 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:51.455 "assigned_rate_limits": { 00:12:51.455 "rw_ios_per_sec": 0, 00:12:51.455 "rw_mbytes_per_sec": 0, 00:12:51.455 "r_mbytes_per_sec": 0, 00:12:51.455 "w_mbytes_per_sec": 0 00:12:51.455 }, 00:12:51.456 "claimed": false, 00:12:51.456 "zoned": false, 00:12:51.456 "supported_io_types": { 00:12:51.456 "read": true, 00:12:51.456 "write": true, 00:12:51.456 "unmap": true, 00:12:51.456 "flush": true, 00:12:51.456 "reset": true, 00:12:51.456 "nvme_admin": false, 00:12:51.456 "nvme_io": false, 00:12:51.456 "nvme_io_md": false, 00:12:51.456 "write_zeroes": true, 00:12:51.456 "zcopy": true, 00:12:51.456 "get_zone_info": false, 00:12:51.456 "zone_management": false, 00:12:51.456 "zone_append": false, 00:12:51.456 "compare": false, 00:12:51.456 "compare_and_write": false, 00:12:51.456 "abort": true, 00:12:51.456 "seek_hole": false, 00:12:51.456 "seek_data": false, 00:12:51.456 "copy": true, 00:12:51.456 "nvme_iov_md": false 00:12:51.456 }, 00:12:51.456 "memory_domains": [ 00:12:51.456 { 00:12:51.456 "dma_device_id": "system", 00:12:51.456 "dma_device_type": 1 00:12:51.456 }, 00:12:51.456 { 00:12:51.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.456 "dma_device_type": 2 00:12:51.456 } 00:12:51.456 ], 00:12:51.456 "driver_specific": {} 00:12:51.456 } 00:12:51.456 ] 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.456 [2024-11-27 14:12:21.873806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.456 [2024-11-27 14:12:21.874006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.456 [2024-11-27 14:12:21.874151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.456 [2024-11-27 14:12:21.876607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
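Each recreated malloc bdev above goes through `waitforbdev`, which (per the trace) defaults `bdev_timeout` to 2000 and repeatedly issues `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev is visible. The sketch below reproduces only the polling shape, not the real helper: `check_cmd` is a hypothetical stand-in for the RPC probe, and the decisecond loop is an assumption about how the timeout is consumed.

```shell
# Hedged sketch of the waitforbdev polling pattern seen in this log.
workdir=$(mktemp -d)

# Stand-in for `rpc_cmd bdev_get_bdevs -b "$1" -t 2000` (hypothetical).
check_cmd() { [ -e "$workdir/$1" ]; }

waitfor() {
    local name=$1 timeout_ds=${2:-20} i=0   # timeout counted in deciseconds here
    while ((i < timeout_ds)); do
        check_cmd "$name" && return 0       # resource appeared; done waiting
        sleep 0.1
        ((++i))
    done
    return 1                                # gave up; caller treats as failure
}

touch "$workdir/demo_bdev"                  # simulate the bdev becoming visible
waitfor demo_bdev && echo "demo_bdev ready"
```

In the real test the probe is the RPC itself, so a freshly created `BaseBdev2`/`BaseBdev3` is usually visible on the first poll, which is why the trace shows a single `bdev_get_bdevs` per `waitforbdev` call.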
00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.456 "name": "Existed_Raid", 00:12:51.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.456 "strip_size_kb": 64, 00:12:51.456 "state": "configuring", 00:12:51.456 "raid_level": "concat", 00:12:51.456 "superblock": false, 00:12:51.456 "num_base_bdevs": 3, 00:12:51.456 "num_base_bdevs_discovered": 2, 00:12:51.456 "num_base_bdevs_operational": 3, 00:12:51.456 "base_bdevs_list": [ 00:12:51.456 { 00:12:51.456 "name": "BaseBdev1", 00:12:51.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.456 "is_configured": false, 00:12:51.456 "data_offset": 0, 00:12:51.456 "data_size": 0 00:12:51.456 }, 00:12:51.456 { 00:12:51.456 "name": "BaseBdev2", 00:12:51.456 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:51.456 "is_configured": true, 00:12:51.456 "data_offset": 0, 00:12:51.456 "data_size": 65536 00:12:51.456 }, 00:12:51.456 { 00:12:51.456 "name": 
"BaseBdev3", 00:12:51.456 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:51.456 "is_configured": true, 00:12:51.456 "data_offset": 0, 00:12:51.456 "data_size": 65536 00:12:51.456 } 00:12:51.456 ] 00:12:51.456 }' 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.456 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.023 [2024-11-27 14:12:22.386067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.023 "name": "Existed_Raid", 00:12:52.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.023 "strip_size_kb": 64, 00:12:52.023 "state": "configuring", 00:12:52.023 "raid_level": "concat", 00:12:52.023 "superblock": false, 00:12:52.023 "num_base_bdevs": 3, 00:12:52.023 "num_base_bdevs_discovered": 1, 00:12:52.023 "num_base_bdevs_operational": 3, 00:12:52.023 "base_bdevs_list": [ 00:12:52.023 { 00:12:52.023 "name": "BaseBdev1", 00:12:52.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.023 "is_configured": false, 00:12:52.023 "data_offset": 0, 00:12:52.023 "data_size": 0 00:12:52.023 }, 00:12:52.023 { 00:12:52.023 "name": null, 00:12:52.023 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:52.023 "is_configured": false, 00:12:52.023 "data_offset": 0, 00:12:52.023 "data_size": 65536 00:12:52.023 }, 00:12:52.023 { 00:12:52.023 "name": "BaseBdev3", 00:12:52.023 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:52.023 "is_configured": true, 00:12:52.023 "data_offset": 0, 00:12:52.023 "data_size": 65536 00:12:52.023 } 00:12:52.023 ] 00:12:52.023 }' 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.023 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 [2024-11-27 14:12:22.972571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.591 BaseBdev1 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.591 
14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.591 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.591 [ 00:12:52.591 { 00:12:52.591 "name": "BaseBdev1", 00:12:52.591 "aliases": [ 00:12:52.591 "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8" 00:12:52.591 ], 00:12:52.591 "product_name": "Malloc disk", 00:12:52.591 "block_size": 512, 00:12:52.591 "num_blocks": 65536, 00:12:52.591 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:52.591 "assigned_rate_limits": { 00:12:52.591 "rw_ios_per_sec": 0, 00:12:52.591 "rw_mbytes_per_sec": 0, 00:12:52.591 "r_mbytes_per_sec": 0, 00:12:52.591 "w_mbytes_per_sec": 0 00:12:52.591 }, 00:12:52.591 "claimed": true, 00:12:52.591 "claim_type": "exclusive_write", 00:12:52.591 "zoned": false, 00:12:52.591 "supported_io_types": { 00:12:52.591 "read": true, 00:12:52.591 "write": true, 00:12:52.591 "unmap": true, 00:12:52.591 "flush": true, 00:12:52.591 "reset": true, 00:12:52.591 "nvme_admin": false, 00:12:52.591 "nvme_io": false, 00:12:52.591 "nvme_io_md": false, 00:12:52.591 "write_zeroes": true, 00:12:52.591 "zcopy": true, 00:12:52.591 "get_zone_info": false, 00:12:52.591 "zone_management": false, 00:12:52.591 "zone_append": false, 00:12:52.591 "compare": 
false, 00:12:52.591 "compare_and_write": false, 00:12:52.591 "abort": true, 00:12:52.591 "seek_hole": false, 00:12:52.592 "seek_data": false, 00:12:52.592 "copy": true, 00:12:52.592 "nvme_iov_md": false 00:12:52.592 }, 00:12:52.592 "memory_domains": [ 00:12:52.592 { 00:12:52.592 "dma_device_id": "system", 00:12:52.592 "dma_device_type": 1 00:12:52.592 }, 00:12:52.592 { 00:12:52.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.592 "dma_device_type": 2 00:12:52.592 } 00:12:52.592 ], 00:12:52.592 "driver_specific": {} 00:12:52.592 } 00:12:52.592 ] 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.592 "name": "Existed_Raid", 00:12:52.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.592 "strip_size_kb": 64, 00:12:52.592 "state": "configuring", 00:12:52.592 "raid_level": "concat", 00:12:52.592 "superblock": false, 00:12:52.592 "num_base_bdevs": 3, 00:12:52.592 "num_base_bdevs_discovered": 2, 00:12:52.592 "num_base_bdevs_operational": 3, 00:12:52.592 "base_bdevs_list": [ 00:12:52.592 { 00:12:52.592 "name": "BaseBdev1", 00:12:52.592 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:52.592 "is_configured": true, 00:12:52.592 "data_offset": 0, 00:12:52.592 "data_size": 65536 00:12:52.592 }, 00:12:52.592 { 00:12:52.592 "name": null, 00:12:52.592 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:52.592 "is_configured": false, 00:12:52.592 "data_offset": 0, 00:12:52.592 "data_size": 65536 00:12:52.592 }, 00:12:52.592 { 00:12:52.592 "name": "BaseBdev3", 00:12:52.592 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:52.592 "is_configured": true, 00:12:52.592 "data_offset": 0, 00:12:52.592 "data_size": 65536 00:12:52.592 } 00:12:52.592 ] 00:12:52.592 }' 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.592 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 
-- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.160 [2024-11-27 14:12:23.616800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.160 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.418 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.418 "name": "Existed_Raid", 00:12:53.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.418 "strip_size_kb": 64, 00:12:53.418 "state": "configuring", 00:12:53.418 "raid_level": "concat", 00:12:53.418 "superblock": false, 00:12:53.418 "num_base_bdevs": 3, 00:12:53.418 "num_base_bdevs_discovered": 1, 00:12:53.418 "num_base_bdevs_operational": 3, 00:12:53.418 "base_bdevs_list": [ 00:12:53.418 { 00:12:53.418 "name": "BaseBdev1", 00:12:53.418 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:53.418 "is_configured": true, 00:12:53.418 "data_offset": 0, 00:12:53.418 "data_size": 65536 00:12:53.418 }, 00:12:53.418 { 00:12:53.418 "name": null, 00:12:53.418 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:53.418 "is_configured": false, 00:12:53.418 "data_offset": 0, 00:12:53.418 "data_size": 65536 00:12:53.418 }, 00:12:53.418 { 00:12:53.418 "name": null, 00:12:53.418 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:53.418 "is_configured": false, 00:12:53.418 
"data_offset": 0, 00:12:53.418 "data_size": 65536 00:12:53.418 } 00:12:53.418 ] 00:12:53.418 }' 00:12:53.418 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.418 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.676 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.676 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.676 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.676 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.676 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.935 [2024-11-27 14:12:24.204999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.935 14:12:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.935 "name": "Existed_Raid", 00:12:53.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.935 "strip_size_kb": 64, 00:12:53.935 "state": "configuring", 00:12:53.935 "raid_level": "concat", 00:12:53.935 "superblock": false, 00:12:53.935 "num_base_bdevs": 3, 00:12:53.935 "num_base_bdevs_discovered": 2, 00:12:53.935 "num_base_bdevs_operational": 3, 00:12:53.935 "base_bdevs_list": [ 00:12:53.935 { 00:12:53.935 "name": "BaseBdev1", 00:12:53.935 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:53.935 "is_configured": true, 00:12:53.935 "data_offset": 
0, 00:12:53.935 "data_size": 65536 00:12:53.935 }, 00:12:53.935 { 00:12:53.935 "name": null, 00:12:53.935 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:53.935 "is_configured": false, 00:12:53.935 "data_offset": 0, 00:12:53.935 "data_size": 65536 00:12:53.935 }, 00:12:53.935 { 00:12:53.935 "name": "BaseBdev3", 00:12:53.935 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:53.935 "is_configured": true, 00:12:53.935 "data_offset": 0, 00:12:53.935 "data_size": 65536 00:12:53.935 } 00:12:53.935 ] 00:12:53.935 }' 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.935 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.501 [2024-11-27 14:12:24.777562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.501 14:12:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.501 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.502 "name": "Existed_Raid", 00:12:54.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.502 "strip_size_kb": 64, 00:12:54.502 "state": "configuring", 00:12:54.502 
"raid_level": "concat", 00:12:54.502 "superblock": false, 00:12:54.502 "num_base_bdevs": 3, 00:12:54.502 "num_base_bdevs_discovered": 1, 00:12:54.502 "num_base_bdevs_operational": 3, 00:12:54.502 "base_bdevs_list": [ 00:12:54.502 { 00:12:54.502 "name": null, 00:12:54.502 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:54.502 "is_configured": false, 00:12:54.502 "data_offset": 0, 00:12:54.502 "data_size": 65536 00:12:54.502 }, 00:12:54.502 { 00:12:54.502 "name": null, 00:12:54.502 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:54.502 "is_configured": false, 00:12:54.502 "data_offset": 0, 00:12:54.502 "data_size": 65536 00:12:54.502 }, 00:12:54.502 { 00:12:54.502 "name": "BaseBdev3", 00:12:54.502 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:54.502 "is_configured": true, 00:12:54.502 "data_offset": 0, 00:12:54.502 "data_size": 65536 00:12:54.502 } 00:12:54.502 ] 00:12:54.502 }' 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.502 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.069 [2024-11-27 14:12:25.422843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.069 "name": "Existed_Raid", 00:12:55.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.069 "strip_size_kb": 64, 00:12:55.069 "state": "configuring", 00:12:55.069 "raid_level": "concat", 00:12:55.069 "superblock": false, 00:12:55.069 "num_base_bdevs": 3, 00:12:55.069 "num_base_bdevs_discovered": 2, 00:12:55.069 "num_base_bdevs_operational": 3, 00:12:55.069 "base_bdevs_list": [ 00:12:55.069 { 00:12:55.069 "name": null, 00:12:55.069 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:55.069 "is_configured": false, 00:12:55.069 "data_offset": 0, 00:12:55.069 "data_size": 65536 00:12:55.069 }, 00:12:55.069 { 00:12:55.069 "name": "BaseBdev2", 00:12:55.069 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:55.069 "is_configured": true, 00:12:55.069 "data_offset": 0, 00:12:55.069 "data_size": 65536 00:12:55.069 }, 00:12:55.069 { 00:12:55.069 "name": "BaseBdev3", 00:12:55.069 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:55.069 "is_configured": true, 00:12:55.069 "data_offset": 0, 00:12:55.069 "data_size": 65536 00:12:55.069 } 00:12:55.069 ] 00:12:55.069 }' 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.069 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.637 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.637 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.637 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.637 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.637 
14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a5b79c0-7c86-4b77-974d-710dbf8cc6e8 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.637 [2024-11-27 14:12:26.134560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:55.637 [2024-11-27 14:12:26.134808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:55.637 [2024-11-27 14:12:26.134910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:55.637 [2024-11-27 14:12:26.135421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:55.637 [2024-11-27 14:12:26.135676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:55.637 [2024-11-27 14:12:26.135697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:55.637 NewBaseBdev 00:12:55.637 [2024-11-27 
14:12:26.136089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:55.637 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.895 [ 00:12:55.895 { 00:12:55.895 "name": "NewBaseBdev", 00:12:55.895 "aliases": [ 00:12:55.895 "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8" 00:12:55.895 ], 00:12:55.895 "product_name": "Malloc disk", 00:12:55.895 "block_size": 512, 00:12:55.895 "num_blocks": 65536, 00:12:55.895 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 
00:12:55.895 "assigned_rate_limits": { 00:12:55.895 "rw_ios_per_sec": 0, 00:12:55.895 "rw_mbytes_per_sec": 0, 00:12:55.895 "r_mbytes_per_sec": 0, 00:12:55.895 "w_mbytes_per_sec": 0 00:12:55.895 }, 00:12:55.895 "claimed": true, 00:12:55.895 "claim_type": "exclusive_write", 00:12:55.895 "zoned": false, 00:12:55.895 "supported_io_types": { 00:12:55.895 "read": true, 00:12:55.895 "write": true, 00:12:55.895 "unmap": true, 00:12:55.895 "flush": true, 00:12:55.895 "reset": true, 00:12:55.895 "nvme_admin": false, 00:12:55.895 "nvme_io": false, 00:12:55.895 "nvme_io_md": false, 00:12:55.895 "write_zeroes": true, 00:12:55.895 "zcopy": true, 00:12:55.895 "get_zone_info": false, 00:12:55.895 "zone_management": false, 00:12:55.895 "zone_append": false, 00:12:55.895 "compare": false, 00:12:55.895 "compare_and_write": false, 00:12:55.895 "abort": true, 00:12:55.895 "seek_hole": false, 00:12:55.895 "seek_data": false, 00:12:55.895 "copy": true, 00:12:55.895 "nvme_iov_md": false 00:12:55.895 }, 00:12:55.895 "memory_domains": [ 00:12:55.895 { 00:12:55.895 "dma_device_id": "system", 00:12:55.895 "dma_device_type": 1 00:12:55.895 }, 00:12:55.895 { 00:12:55.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.895 "dma_device_type": 2 00:12:55.895 } 00:12:55.895 ], 00:12:55.895 "driver_specific": {} 00:12:55.895 } 00:12:55.895 ] 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.895 "name": "Existed_Raid", 00:12:55.895 "uuid": "c5d0489a-9764-4919-82aa-8c87e9c72df6", 00:12:55.895 "strip_size_kb": 64, 00:12:55.895 "state": "online", 00:12:55.895 "raid_level": "concat", 00:12:55.895 "superblock": false, 00:12:55.895 "num_base_bdevs": 3, 00:12:55.895 "num_base_bdevs_discovered": 3, 00:12:55.895 "num_base_bdevs_operational": 3, 00:12:55.895 "base_bdevs_list": [ 00:12:55.895 { 00:12:55.895 "name": "NewBaseBdev", 00:12:55.895 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:55.895 "is_configured": true, 00:12:55.895 "data_offset": 0, 00:12:55.895 "data_size": 65536 
00:12:55.895 }, 00:12:55.895 { 00:12:55.895 "name": "BaseBdev2", 00:12:55.895 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:55.895 "is_configured": true, 00:12:55.895 "data_offset": 0, 00:12:55.895 "data_size": 65536 00:12:55.895 }, 00:12:55.895 { 00:12:55.895 "name": "BaseBdev3", 00:12:55.895 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:55.895 "is_configured": true, 00:12:55.895 "data_offset": 0, 00:12:55.895 "data_size": 65536 00:12:55.895 } 00:12:55.895 ] 00:12:55.895 }' 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.895 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 [2024-11-27 14:12:26.675158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.463 "name": "Existed_Raid", 00:12:56.463 "aliases": [ 00:12:56.463 "c5d0489a-9764-4919-82aa-8c87e9c72df6" 00:12:56.463 ], 00:12:56.463 "product_name": "Raid Volume", 00:12:56.463 "block_size": 512, 00:12:56.463 "num_blocks": 196608, 00:12:56.463 "uuid": "c5d0489a-9764-4919-82aa-8c87e9c72df6", 00:12:56.463 "assigned_rate_limits": { 00:12:56.463 "rw_ios_per_sec": 0, 00:12:56.463 "rw_mbytes_per_sec": 0, 00:12:56.463 "r_mbytes_per_sec": 0, 00:12:56.463 "w_mbytes_per_sec": 0 00:12:56.463 }, 00:12:56.463 "claimed": false, 00:12:56.463 "zoned": false, 00:12:56.463 "supported_io_types": { 00:12:56.463 "read": true, 00:12:56.463 "write": true, 00:12:56.463 "unmap": true, 00:12:56.463 "flush": true, 00:12:56.463 "reset": true, 00:12:56.463 "nvme_admin": false, 00:12:56.463 "nvme_io": false, 00:12:56.463 "nvme_io_md": false, 00:12:56.463 "write_zeroes": true, 00:12:56.463 "zcopy": false, 00:12:56.463 "get_zone_info": false, 00:12:56.463 "zone_management": false, 00:12:56.463 "zone_append": false, 00:12:56.463 "compare": false, 00:12:56.463 "compare_and_write": false, 00:12:56.463 "abort": false, 00:12:56.463 "seek_hole": false, 00:12:56.463 "seek_data": false, 00:12:56.463 "copy": false, 00:12:56.463 "nvme_iov_md": false 00:12:56.463 }, 00:12:56.463 "memory_domains": [ 00:12:56.463 { 00:12:56.463 "dma_device_id": "system", 00:12:56.463 "dma_device_type": 1 00:12:56.463 }, 00:12:56.463 { 00:12:56.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.463 "dma_device_type": 2 00:12:56.463 }, 00:12:56.463 { 00:12:56.463 "dma_device_id": "system", 00:12:56.463 "dma_device_type": 1 00:12:56.463 }, 00:12:56.463 { 00:12:56.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.463 "dma_device_type": 2 00:12:56.463 }, 00:12:56.463 { 00:12:56.463 "dma_device_id": "system", 00:12:56.463 "dma_device_type": 1 00:12:56.463 }, 
00:12:56.463 { 00:12:56.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.463 "dma_device_type": 2 00:12:56.463 } 00:12:56.463 ], 00:12:56.463 "driver_specific": { 00:12:56.463 "raid": { 00:12:56.463 "uuid": "c5d0489a-9764-4919-82aa-8c87e9c72df6", 00:12:56.463 "strip_size_kb": 64, 00:12:56.463 "state": "online", 00:12:56.463 "raid_level": "concat", 00:12:56.463 "superblock": false, 00:12:56.463 "num_base_bdevs": 3, 00:12:56.463 "num_base_bdevs_discovered": 3, 00:12:56.463 "num_base_bdevs_operational": 3, 00:12:56.463 "base_bdevs_list": [ 00:12:56.463 { 00:12:56.463 "name": "NewBaseBdev", 00:12:56.463 "uuid": "9a5b79c0-7c86-4b77-974d-710dbf8cc6e8", 00:12:56.463 "is_configured": true, 00:12:56.463 "data_offset": 0, 00:12:56.463 "data_size": 65536 00:12:56.463 }, 00:12:56.463 { 00:12:56.463 "name": "BaseBdev2", 00:12:56.463 "uuid": "650950f0-689c-457e-9087-a8a29abbaa43", 00:12:56.463 "is_configured": true, 00:12:56.463 "data_offset": 0, 00:12:56.463 "data_size": 65536 00:12:56.463 }, 00:12:56.463 { 00:12:56.463 "name": "BaseBdev3", 00:12:56.463 "uuid": "99a8cb70-1f00-4df0-a0e4-71de8f985299", 00:12:56.463 "is_configured": true, 00:12:56.463 "data_offset": 0, 00:12:56.463 "data_size": 65536 00:12:56.463 } 00:12:56.463 ] 00:12:56.463 } 00:12:56.463 } 00:12:56.463 }' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:56.463 BaseBdev2 00:12:56.463 BaseBdev3' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.723 [2024-11-27 14:12:26.978832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.723 [2024-11-27 14:12:26.978979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.723 [2024-11-27 14:12:26.979179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.723 [2024-11-27 14:12:26.979353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.723 [2024-11-27 14:12:26.979471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65802 00:12:56.723 14:12:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65802 ']' 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65802 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.723 14:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65802 00:12:56.723 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.723 killing process with pid 65802 00:12:56.723 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.723 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65802' 00:12:56.723 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65802 00:12:56.723 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65802 00:12:56.723 [2024-11-27 14:12:27.020672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.981 [2024-11-27 14:12:27.296918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:57.918 00:12:57.918 real 0m11.819s 00:12:57.918 user 0m19.698s 00:12:57.918 sys 0m1.539s 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.918 ************************************ 00:12:57.918 END TEST raid_state_function_test 00:12:57.918 ************************************ 00:12:57.918 14:12:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:57.918 14:12:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:57.918 14:12:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.918 14:12:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.918 ************************************ 00:12:57.918 START TEST raid_state_function_test_sb 00:12:57.918 ************************************ 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66434 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:57.918 Process raid pid: 66434 00:12:57.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66434' 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66434 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66434 ']' 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.918 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.176 [2024-11-27 14:12:28.530518] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:12:58.176 [2024-11-27 14:12:28.530688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.433 [2024-11-27 14:12:28.713454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.433 [2024-11-27 14:12:28.844527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.690 [2024-11-27 14:12:29.050354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.690 [2024-11-27 14:12:29.050406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.254 [2024-11-27 14:12:29.514987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.254 [2024-11-27 14:12:29.515455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.254 [2024-11-27 14:12:29.515583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.254 [2024-11-27 14:12:29.515646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.254 [2024-11-27 14:12:29.515775] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:59.254 [2024-11-27 14:12:29.515852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.254 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.254 "name": "Existed_Raid", 00:12:59.254 "uuid": "9218a0ae-7fb4-41a8-9997-d8137dac2ef0", 00:12:59.254 "strip_size_kb": 64, 00:12:59.255 "state": "configuring", 00:12:59.255 "raid_level": "concat", 00:12:59.255 "superblock": true, 00:12:59.255 "num_base_bdevs": 3, 00:12:59.255 "num_base_bdevs_discovered": 0, 00:12:59.255 "num_base_bdevs_operational": 3, 00:12:59.255 "base_bdevs_list": [ 00:12:59.255 { 00:12:59.255 "name": "BaseBdev1", 00:12:59.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.255 "is_configured": false, 00:12:59.255 "data_offset": 0, 00:12:59.255 "data_size": 0 00:12:59.255 }, 00:12:59.255 { 00:12:59.255 "name": "BaseBdev2", 00:12:59.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.255 "is_configured": false, 00:12:59.255 "data_offset": 0, 00:12:59.255 "data_size": 0 00:12:59.255 }, 00:12:59.255 { 00:12:59.255 "name": "BaseBdev3", 00:12:59.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.255 "is_configured": false, 00:12:59.255 "data_offset": 0, 00:12:59.255 "data_size": 0 00:12:59.255 } 00:12:59.255 ] 00:12:59.255 }' 00:12:59.255 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.255 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.818 [2024-11-27 14:12:30.031044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.818 [2024-11-27 14:12:30.031218] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.818 [2024-11-27 14:12:30.039038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.818 [2024-11-27 14:12:30.039210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.818 [2024-11-27 14:12:30.039329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.818 [2024-11-27 14:12:30.039389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.818 [2024-11-27 14:12:30.039654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.818 [2024-11-27 14:12:30.039716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.818 [2024-11-27 14:12:30.084035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.818 BaseBdev1 
00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.818 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.818 [ 00:12:59.818 { 00:12:59.818 "name": "BaseBdev1", 00:12:59.818 "aliases": [ 00:12:59.818 "41ddcdff-bced-46fb-971c-825ef8e8e112" 00:12:59.818 ], 00:12:59.818 "product_name": "Malloc disk", 00:12:59.818 "block_size": 512, 00:12:59.818 "num_blocks": 65536, 00:12:59.818 "uuid": "41ddcdff-bced-46fb-971c-825ef8e8e112", 00:12:59.818 "assigned_rate_limits": { 00:12:59.818 
"rw_ios_per_sec": 0, 00:12:59.818 "rw_mbytes_per_sec": 0, 00:12:59.818 "r_mbytes_per_sec": 0, 00:12:59.818 "w_mbytes_per_sec": 0 00:12:59.818 }, 00:12:59.818 "claimed": true, 00:12:59.818 "claim_type": "exclusive_write", 00:12:59.818 "zoned": false, 00:12:59.818 "supported_io_types": { 00:12:59.818 "read": true, 00:12:59.818 "write": true, 00:12:59.818 "unmap": true, 00:12:59.818 "flush": true, 00:12:59.818 "reset": true, 00:12:59.818 "nvme_admin": false, 00:12:59.818 "nvme_io": false, 00:12:59.818 "nvme_io_md": false, 00:12:59.818 "write_zeroes": true, 00:12:59.818 "zcopy": true, 00:12:59.818 "get_zone_info": false, 00:12:59.818 "zone_management": false, 00:12:59.818 "zone_append": false, 00:12:59.818 "compare": false, 00:12:59.819 "compare_and_write": false, 00:12:59.819 "abort": true, 00:12:59.819 "seek_hole": false, 00:12:59.819 "seek_data": false, 00:12:59.819 "copy": true, 00:12:59.819 "nvme_iov_md": false 00:12:59.819 }, 00:12:59.819 "memory_domains": [ 00:12:59.819 { 00:12:59.819 "dma_device_id": "system", 00:12:59.819 "dma_device_type": 1 00:12:59.819 }, 00:12:59.819 { 00:12:59.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.819 "dma_device_type": 2 00:12:59.819 } 00:12:59.819 ], 00:12:59.819 "driver_specific": {} 00:12:59.819 } 00:12:59.819 ] 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.819 "name": "Existed_Raid", 00:12:59.819 "uuid": "db12c7ef-d1bf-4210-beeb-346250a08350", 00:12:59.819 "strip_size_kb": 64, 00:12:59.819 "state": "configuring", 00:12:59.819 "raid_level": "concat", 00:12:59.819 "superblock": true, 00:12:59.819 "num_base_bdevs": 3, 00:12:59.819 "num_base_bdevs_discovered": 1, 00:12:59.819 "num_base_bdevs_operational": 3, 00:12:59.819 "base_bdevs_list": [ 00:12:59.819 { 00:12:59.819 "name": "BaseBdev1", 00:12:59.819 "uuid": "41ddcdff-bced-46fb-971c-825ef8e8e112", 00:12:59.819 "is_configured": true, 00:12:59.819 "data_offset": 2048, 00:12:59.819 "data_size": 
63488 00:12:59.819 }, 00:12:59.819 { 00:12:59.819 "name": "BaseBdev2", 00:12:59.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.819 "is_configured": false, 00:12:59.819 "data_offset": 0, 00:12:59.819 "data_size": 0 00:12:59.819 }, 00:12:59.819 { 00:12:59.819 "name": "BaseBdev3", 00:12:59.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.819 "is_configured": false, 00:12:59.819 "data_offset": 0, 00:12:59.819 "data_size": 0 00:12:59.819 } 00:12:59.819 ] 00:12:59.819 }' 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.819 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.384 [2024-11-27 14:12:30.640231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.384 [2024-11-27 14:12:30.640434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.384 [2024-11-27 14:12:30.648298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.384 [2024-11-27 
14:12:30.650973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.384 [2024-11-27 14:12:30.651147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.384 [2024-11-27 14:12:30.651175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:00.384 [2024-11-27 14:12:30.651193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.384 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.384 "name": "Existed_Raid", 00:13:00.384 "uuid": "85b2d1df-8d87-4301-8dff-877ab8b0a57e", 00:13:00.384 "strip_size_kb": 64, 00:13:00.384 "state": "configuring", 00:13:00.384 "raid_level": "concat", 00:13:00.384 "superblock": true, 00:13:00.384 "num_base_bdevs": 3, 00:13:00.384 "num_base_bdevs_discovered": 1, 00:13:00.384 "num_base_bdevs_operational": 3, 00:13:00.384 "base_bdevs_list": [ 00:13:00.384 { 00:13:00.384 "name": "BaseBdev1", 00:13:00.384 "uuid": "41ddcdff-bced-46fb-971c-825ef8e8e112", 00:13:00.385 "is_configured": true, 00:13:00.385 "data_offset": 2048, 00:13:00.385 "data_size": 63488 00:13:00.385 }, 00:13:00.385 { 00:13:00.385 "name": "BaseBdev2", 00:13:00.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.385 "is_configured": false, 00:13:00.385 "data_offset": 0, 00:13:00.385 "data_size": 0 00:13:00.385 }, 00:13:00.385 { 00:13:00.385 "name": "BaseBdev3", 00:13:00.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.385 "is_configured": false, 00:13:00.385 "data_offset": 0, 00:13:00.385 "data_size": 0 00:13:00.385 } 00:13:00.385 ] 00:13:00.385 }' 00:13:00.385 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.385 14:12:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.952 [2024-11-27 14:12:31.250813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.952 BaseBdev2 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.952 [ 00:13:00.952 { 00:13:00.952 "name": "BaseBdev2", 00:13:00.952 "aliases": [ 00:13:00.952 "9c56e10b-26b8-4413-8ae3-9b4a4f86aac0" 00:13:00.952 ], 00:13:00.952 "product_name": "Malloc disk", 00:13:00.952 "block_size": 512, 00:13:00.952 "num_blocks": 65536, 00:13:00.952 "uuid": "9c56e10b-26b8-4413-8ae3-9b4a4f86aac0", 00:13:00.952 "assigned_rate_limits": { 00:13:00.952 "rw_ios_per_sec": 0, 00:13:00.952 "rw_mbytes_per_sec": 0, 00:13:00.952 "r_mbytes_per_sec": 0, 00:13:00.952 "w_mbytes_per_sec": 0 00:13:00.952 }, 00:13:00.952 "claimed": true, 00:13:00.952 "claim_type": "exclusive_write", 00:13:00.952 "zoned": false, 00:13:00.952 "supported_io_types": { 00:13:00.952 "read": true, 00:13:00.952 "write": true, 00:13:00.952 "unmap": true, 00:13:00.952 "flush": true, 00:13:00.952 "reset": true, 00:13:00.952 "nvme_admin": false, 00:13:00.952 "nvme_io": false, 00:13:00.952 "nvme_io_md": false, 00:13:00.952 "write_zeroes": true, 00:13:00.952 "zcopy": true, 00:13:00.952 "get_zone_info": false, 00:13:00.952 "zone_management": false, 00:13:00.952 "zone_append": false, 00:13:00.952 "compare": false, 00:13:00.952 "compare_and_write": false, 00:13:00.952 "abort": true, 00:13:00.952 "seek_hole": false, 00:13:00.952 "seek_data": false, 00:13:00.952 "copy": true, 00:13:00.952 "nvme_iov_md": false 00:13:00.952 }, 00:13:00.952 "memory_domains": [ 00:13:00.952 { 00:13:00.952 "dma_device_id": "system", 00:13:00.952 "dma_device_type": 1 00:13:00.952 }, 00:13:00.952 { 00:13:00.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.952 "dma_device_type": 2 00:13:00.952 } 00:13:00.952 ], 00:13:00.952 "driver_specific": {} 00:13:00.952 } 00:13:00.952 ] 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.952 "name": "Existed_Raid", 00:13:00.952 "uuid": "85b2d1df-8d87-4301-8dff-877ab8b0a57e", 00:13:00.952 "strip_size_kb": 64, 00:13:00.952 "state": "configuring", 00:13:00.952 "raid_level": "concat", 00:13:00.952 "superblock": true, 00:13:00.952 "num_base_bdevs": 3, 00:13:00.952 "num_base_bdevs_discovered": 2, 00:13:00.952 "num_base_bdevs_operational": 3, 00:13:00.952 "base_bdevs_list": [ 00:13:00.952 { 00:13:00.952 "name": "BaseBdev1", 00:13:00.952 "uuid": "41ddcdff-bced-46fb-971c-825ef8e8e112", 00:13:00.952 "is_configured": true, 00:13:00.952 "data_offset": 2048, 00:13:00.952 "data_size": 63488 00:13:00.952 }, 00:13:00.952 { 00:13:00.952 "name": "BaseBdev2", 00:13:00.952 "uuid": "9c56e10b-26b8-4413-8ae3-9b4a4f86aac0", 00:13:00.952 "is_configured": true, 00:13:00.952 "data_offset": 2048, 00:13:00.952 "data_size": 63488 00:13:00.952 }, 00:13:00.952 { 00:13:00.952 "name": "BaseBdev3", 00:13:00.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.952 "is_configured": false, 00:13:00.952 "data_offset": 0, 00:13:00.952 "data_size": 0 00:13:00.952 } 00:13:00.952 ] 00:13:00.952 }' 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.952 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.519 [2024-11-27 14:12:31.857275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.519 [2024-11-27 14:12:31.857615] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:01.519 [2024-11-27 14:12:31.857646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:01.519 [2024-11-27 14:12:31.858032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:01.519 BaseBdev3 00:13:01.519 [2024-11-27 14:12:31.858264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:01.519 [2024-11-27 14:12:31.858289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:01.519 [2024-11-27 14:12:31.858494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.519 [ 00:13:01.519 { 00:13:01.519 "name": "BaseBdev3", 00:13:01.519 "aliases": [ 00:13:01.519 "5e59dfc5-7951-454a-8a89-487ac95ba699" 00:13:01.519 ], 00:13:01.519 "product_name": "Malloc disk", 00:13:01.519 "block_size": 512, 00:13:01.519 "num_blocks": 65536, 00:13:01.519 "uuid": "5e59dfc5-7951-454a-8a89-487ac95ba699", 00:13:01.519 "assigned_rate_limits": { 00:13:01.519 "rw_ios_per_sec": 0, 00:13:01.519 "rw_mbytes_per_sec": 0, 00:13:01.519 "r_mbytes_per_sec": 0, 00:13:01.519 "w_mbytes_per_sec": 0 00:13:01.519 }, 00:13:01.519 "claimed": true, 00:13:01.519 "claim_type": "exclusive_write", 00:13:01.519 "zoned": false, 00:13:01.519 "supported_io_types": { 00:13:01.519 "read": true, 00:13:01.519 "write": true, 00:13:01.519 "unmap": true, 00:13:01.519 "flush": true, 00:13:01.519 "reset": true, 00:13:01.519 "nvme_admin": false, 00:13:01.519 "nvme_io": false, 00:13:01.519 "nvme_io_md": false, 00:13:01.519 "write_zeroes": true, 00:13:01.519 "zcopy": true, 00:13:01.519 "get_zone_info": false, 00:13:01.519 "zone_management": false, 00:13:01.519 "zone_append": false, 00:13:01.519 "compare": false, 00:13:01.519 "compare_and_write": false, 00:13:01.519 "abort": true, 00:13:01.519 "seek_hole": false, 00:13:01.519 "seek_data": false, 00:13:01.519 "copy": true, 00:13:01.519 "nvme_iov_md": false 00:13:01.519 }, 00:13:01.519 "memory_domains": [ 00:13:01.519 { 00:13:01.519 "dma_device_id": "system", 00:13:01.519 "dma_device_type": 1 00:13:01.519 }, 00:13:01.519 { 00:13:01.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.519 "dma_device_type": 2 00:13:01.519 } 00:13:01.519 ], 00:13:01.519 "driver_specific": 
{} 00:13:01.519 } 00:13:01.519 ] 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.519 
14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.519 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.520 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.520 "name": "Existed_Raid", 00:13:01.520 "uuid": "85b2d1df-8d87-4301-8dff-877ab8b0a57e", 00:13:01.520 "strip_size_kb": 64, 00:13:01.520 "state": "online", 00:13:01.520 "raid_level": "concat", 00:13:01.520 "superblock": true, 00:13:01.520 "num_base_bdevs": 3, 00:13:01.520 "num_base_bdevs_discovered": 3, 00:13:01.520 "num_base_bdevs_operational": 3, 00:13:01.520 "base_bdevs_list": [ 00:13:01.520 { 00:13:01.520 "name": "BaseBdev1", 00:13:01.520 "uuid": "41ddcdff-bced-46fb-971c-825ef8e8e112", 00:13:01.520 "is_configured": true, 00:13:01.520 "data_offset": 2048, 00:13:01.520 "data_size": 63488 00:13:01.520 }, 00:13:01.520 { 00:13:01.520 "name": "BaseBdev2", 00:13:01.520 "uuid": "9c56e10b-26b8-4413-8ae3-9b4a4f86aac0", 00:13:01.520 "is_configured": true, 00:13:01.520 "data_offset": 2048, 00:13:01.520 "data_size": 63488 00:13:01.520 }, 00:13:01.520 { 00:13:01.520 "name": "BaseBdev3", 00:13:01.520 "uuid": "5e59dfc5-7951-454a-8a89-487ac95ba699", 00:13:01.520 "is_configured": true, 00:13:01.520 "data_offset": 2048, 00:13:01.520 "data_size": 63488 00:13:01.520 } 00:13:01.520 ] 00:13:01.520 }' 00:13:01.520 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.520 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.087 [2024-11-27 14:12:32.401877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.087 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.087 "name": "Existed_Raid", 00:13:02.087 "aliases": [ 00:13:02.087 "85b2d1df-8d87-4301-8dff-877ab8b0a57e" 00:13:02.087 ], 00:13:02.087 "product_name": "Raid Volume", 00:13:02.087 "block_size": 512, 00:13:02.087 "num_blocks": 190464, 00:13:02.087 "uuid": "85b2d1df-8d87-4301-8dff-877ab8b0a57e", 00:13:02.087 "assigned_rate_limits": { 00:13:02.087 "rw_ios_per_sec": 0, 00:13:02.087 "rw_mbytes_per_sec": 0, 00:13:02.087 "r_mbytes_per_sec": 0, 00:13:02.087 "w_mbytes_per_sec": 0 00:13:02.087 }, 00:13:02.087 "claimed": false, 00:13:02.087 "zoned": false, 00:13:02.087 "supported_io_types": { 00:13:02.087 "read": true, 00:13:02.087 "write": true, 00:13:02.087 "unmap": true, 00:13:02.087 "flush": true, 00:13:02.087 "reset": true, 00:13:02.087 "nvme_admin": false, 00:13:02.087 "nvme_io": false, 00:13:02.087 "nvme_io_md": false, 00:13:02.087 
"write_zeroes": true, 00:13:02.087 "zcopy": false, 00:13:02.087 "get_zone_info": false, 00:13:02.087 "zone_management": false, 00:13:02.087 "zone_append": false, 00:13:02.087 "compare": false, 00:13:02.087 "compare_and_write": false, 00:13:02.087 "abort": false, 00:13:02.087 "seek_hole": false, 00:13:02.087 "seek_data": false, 00:13:02.087 "copy": false, 00:13:02.087 "nvme_iov_md": false 00:13:02.087 }, 00:13:02.087 "memory_domains": [ 00:13:02.087 { 00:13:02.087 "dma_device_id": "system", 00:13:02.087 "dma_device_type": 1 00:13:02.087 }, 00:13:02.087 { 00:13:02.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.087 "dma_device_type": 2 00:13:02.087 }, 00:13:02.087 { 00:13:02.087 "dma_device_id": "system", 00:13:02.087 "dma_device_type": 1 00:13:02.087 }, 00:13:02.087 { 00:13:02.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.087 "dma_device_type": 2 00:13:02.087 }, 00:13:02.087 { 00:13:02.087 "dma_device_id": "system", 00:13:02.087 "dma_device_type": 1 00:13:02.087 }, 00:13:02.087 { 00:13:02.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.088 "dma_device_type": 2 00:13:02.088 } 00:13:02.088 ], 00:13:02.088 "driver_specific": { 00:13:02.088 "raid": { 00:13:02.088 "uuid": "85b2d1df-8d87-4301-8dff-877ab8b0a57e", 00:13:02.088 "strip_size_kb": 64, 00:13:02.088 "state": "online", 00:13:02.088 "raid_level": "concat", 00:13:02.088 "superblock": true, 00:13:02.088 "num_base_bdevs": 3, 00:13:02.088 "num_base_bdevs_discovered": 3, 00:13:02.088 "num_base_bdevs_operational": 3, 00:13:02.088 "base_bdevs_list": [ 00:13:02.088 { 00:13:02.088 "name": "BaseBdev1", 00:13:02.088 "uuid": "41ddcdff-bced-46fb-971c-825ef8e8e112", 00:13:02.088 "is_configured": true, 00:13:02.088 "data_offset": 2048, 00:13:02.088 "data_size": 63488 00:13:02.088 }, 00:13:02.088 { 00:13:02.088 "name": "BaseBdev2", 00:13:02.088 "uuid": "9c56e10b-26b8-4413-8ae3-9b4a4f86aac0", 00:13:02.088 "is_configured": true, 00:13:02.088 "data_offset": 2048, 00:13:02.088 "data_size": 63488 00:13:02.088 }, 
00:13:02.088 { 00:13:02.088 "name": "BaseBdev3", 00:13:02.088 "uuid": "5e59dfc5-7951-454a-8a89-487ac95ba699", 00:13:02.088 "is_configured": true, 00:13:02.088 "data_offset": 2048, 00:13:02.088 "data_size": 63488 00:13:02.088 } 00:13:02.088 ] 00:13:02.088 } 00:13:02.088 } 00:13:02.088 }' 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:02.088 BaseBdev2 00:13:02.088 BaseBdev3' 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.088 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.346 
14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.346 [2024-11-27 14:12:32.721589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.346 [2024-11-27 14:12:32.721626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.346 [2024-11-27 14:12:32.721695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:02.346 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.347 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.605 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.605 "name": "Existed_Raid", 00:13:02.605 "uuid": "85b2d1df-8d87-4301-8dff-877ab8b0a57e", 00:13:02.605 "strip_size_kb": 64, 00:13:02.605 "state": "offline", 00:13:02.605 "raid_level": "concat", 00:13:02.605 "superblock": true, 00:13:02.605 "num_base_bdevs": 3, 00:13:02.605 "num_base_bdevs_discovered": 2, 00:13:02.605 "num_base_bdevs_operational": 2, 00:13:02.605 "base_bdevs_list": [ 00:13:02.605 { 00:13:02.605 "name": null, 00:13:02.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.605 "is_configured": false, 00:13:02.605 "data_offset": 0, 00:13:02.605 "data_size": 63488 00:13:02.605 }, 00:13:02.605 { 00:13:02.605 "name": "BaseBdev2", 00:13:02.605 "uuid": "9c56e10b-26b8-4413-8ae3-9b4a4f86aac0", 00:13:02.605 "is_configured": true, 00:13:02.605 "data_offset": 2048, 00:13:02.605 "data_size": 63488 00:13:02.605 }, 00:13:02.605 { 00:13:02.605 "name": "BaseBdev3", 00:13:02.605 "uuid": "5e59dfc5-7951-454a-8a89-487ac95ba699", 
00:13:02.605 "is_configured": true, 00:13:02.605 "data_offset": 2048, 00:13:02.605 "data_size": 63488 00:13:02.605 } 00:13:02.605 ] 00:13:02.605 }' 00:13:02.605 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.605 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.864 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.122 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.123 [2024-11-27 14:12:33.384528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.123 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.123 [2024-11-27 14:12:33.552357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:03.123 [2024-11-27 14:12:33.552452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:03.381 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.381 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 BaseBdev2 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:03.382 14:12:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 [ 00:13:03.382 { 00:13:03.382 "name": "BaseBdev2", 00:13:03.382 "aliases": [ 00:13:03.382 "0b4931d9-588c-4193-97ff-20f72e5b6620" 00:13:03.382 ], 00:13:03.382 "product_name": "Malloc disk", 00:13:03.382 "block_size": 512, 00:13:03.382 "num_blocks": 65536, 00:13:03.382 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:03.382 "assigned_rate_limits": { 00:13:03.382 "rw_ios_per_sec": 0, 00:13:03.382 "rw_mbytes_per_sec": 0, 00:13:03.382 "r_mbytes_per_sec": 0, 00:13:03.382 "w_mbytes_per_sec": 0 00:13:03.382 }, 00:13:03.382 "claimed": false, 00:13:03.382 "zoned": false, 00:13:03.382 "supported_io_types": { 00:13:03.382 "read": true, 00:13:03.382 "write": true, 00:13:03.382 "unmap": true, 00:13:03.382 "flush": true, 00:13:03.382 "reset": true, 00:13:03.382 "nvme_admin": false, 00:13:03.382 "nvme_io": false, 00:13:03.382 "nvme_io_md": false, 00:13:03.382 "write_zeroes": true, 00:13:03.382 "zcopy": true, 00:13:03.382 "get_zone_info": false, 00:13:03.382 
"zone_management": false, 00:13:03.382 "zone_append": false, 00:13:03.382 "compare": false, 00:13:03.382 "compare_and_write": false, 00:13:03.382 "abort": true, 00:13:03.382 "seek_hole": false, 00:13:03.382 "seek_data": false, 00:13:03.382 "copy": true, 00:13:03.382 "nvme_iov_md": false 00:13:03.382 }, 00:13:03.382 "memory_domains": [ 00:13:03.382 { 00:13:03.382 "dma_device_id": "system", 00:13:03.382 "dma_device_type": 1 00:13:03.382 }, 00:13:03.382 { 00:13:03.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.382 "dma_device_type": 2 00:13:03.382 } 00:13:03.382 ], 00:13:03.382 "driver_specific": {} 00:13:03.382 } 00:13:03.382 ] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 BaseBdev3 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 [ 00:13:03.382 { 00:13:03.382 "name": "BaseBdev3", 00:13:03.382 "aliases": [ 00:13:03.382 "34656d1b-e5f6-40e9-8d24-dda7556e2d45" 00:13:03.382 ], 00:13:03.382 "product_name": "Malloc disk", 00:13:03.382 "block_size": 512, 00:13:03.382 "num_blocks": 65536, 00:13:03.382 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:03.382 "assigned_rate_limits": { 00:13:03.382 "rw_ios_per_sec": 0, 00:13:03.382 "rw_mbytes_per_sec": 0, 00:13:03.382 "r_mbytes_per_sec": 0, 00:13:03.382 "w_mbytes_per_sec": 0 00:13:03.382 }, 00:13:03.382 "claimed": false, 00:13:03.382 "zoned": false, 00:13:03.382 "supported_io_types": { 00:13:03.382 "read": true, 00:13:03.382 "write": true, 00:13:03.382 "unmap": true, 00:13:03.382 "flush": true, 00:13:03.382 "reset": true, 00:13:03.382 "nvme_admin": false, 00:13:03.382 "nvme_io": false, 00:13:03.382 "nvme_io_md": false, 00:13:03.382 "write_zeroes": true, 00:13:03.382 
"zcopy": true, 00:13:03.382 "get_zone_info": false, 00:13:03.382 "zone_management": false, 00:13:03.382 "zone_append": false, 00:13:03.382 "compare": false, 00:13:03.382 "compare_and_write": false, 00:13:03.382 "abort": true, 00:13:03.382 "seek_hole": false, 00:13:03.382 "seek_data": false, 00:13:03.382 "copy": true, 00:13:03.382 "nvme_iov_md": false 00:13:03.382 }, 00:13:03.382 "memory_domains": [ 00:13:03.382 { 00:13:03.382 "dma_device_id": "system", 00:13:03.382 "dma_device_type": 1 00:13:03.382 }, 00:13:03.382 { 00:13:03.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.382 "dma_device_type": 2 00:13:03.382 } 00:13:03.382 ], 00:13:03.382 "driver_specific": {} 00:13:03.382 } 00:13:03.382 ] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.382 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.382 [2024-11-27 14:12:33.886092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.382 [2024-11-27 14:12:33.886636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.382 [2024-11-27 14:12:33.886693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.382 [2024-11-27 14:12:33.889260] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.641 14:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.641 "name": "Existed_Raid", 00:13:03.641 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:03.641 "strip_size_kb": 64, 00:13:03.641 "state": "configuring", 00:13:03.641 "raid_level": "concat", 00:13:03.641 "superblock": true, 00:13:03.641 "num_base_bdevs": 3, 00:13:03.641 "num_base_bdevs_discovered": 2, 00:13:03.641 "num_base_bdevs_operational": 3, 00:13:03.641 "base_bdevs_list": [ 00:13:03.641 { 00:13:03.641 "name": "BaseBdev1", 00:13:03.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.641 "is_configured": false, 00:13:03.641 "data_offset": 0, 00:13:03.641 "data_size": 0 00:13:03.641 }, 00:13:03.641 { 00:13:03.641 "name": "BaseBdev2", 00:13:03.641 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:03.641 "is_configured": true, 00:13:03.641 "data_offset": 2048, 00:13:03.641 "data_size": 63488 00:13:03.641 }, 00:13:03.641 { 00:13:03.641 "name": "BaseBdev3", 00:13:03.641 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:03.641 "is_configured": true, 00:13:03.641 "data_offset": 2048, 00:13:03.641 "data_size": 63488 00:13:03.641 } 00:13:03.641 ] 00:13:03.641 }' 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.641 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.900 [2024-11-27 14:12:34.374206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.900 14:12:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.900 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.159 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.159 "name": "Existed_Raid", 00:13:04.159 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:04.159 "strip_size_kb": 64, 
00:13:04.159 "state": "configuring", 00:13:04.159 "raid_level": "concat", 00:13:04.159 "superblock": true, 00:13:04.159 "num_base_bdevs": 3, 00:13:04.159 "num_base_bdevs_discovered": 1, 00:13:04.159 "num_base_bdevs_operational": 3, 00:13:04.159 "base_bdevs_list": [ 00:13:04.159 { 00:13:04.159 "name": "BaseBdev1", 00:13:04.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.159 "is_configured": false, 00:13:04.159 "data_offset": 0, 00:13:04.159 "data_size": 0 00:13:04.159 }, 00:13:04.159 { 00:13:04.159 "name": null, 00:13:04.159 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:04.159 "is_configured": false, 00:13:04.159 "data_offset": 0, 00:13:04.159 "data_size": 63488 00:13:04.159 }, 00:13:04.159 { 00:13:04.159 "name": "BaseBdev3", 00:13:04.159 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:04.159 "is_configured": true, 00:13:04.159 "data_offset": 2048, 00:13:04.159 "data_size": 63488 00:13:04.159 } 00:13:04.159 ] 00:13:04.159 }' 00:13:04.159 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.159 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.417 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.417 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.417 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.417 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.675 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:04.675 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:13:04.675 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 [2024-11-27 14:12:34.993032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.676 BaseBdev1 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.676 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 
[ 00:13:04.676 { 00:13:04.676 "name": "BaseBdev1", 00:13:04.676 "aliases": [ 00:13:04.676 "67c551f2-c456-430f-afb3-3a1108d7a29f" 00:13:04.676 ], 00:13:04.676 "product_name": "Malloc disk", 00:13:04.676 "block_size": 512, 00:13:04.676 "num_blocks": 65536, 00:13:04.676 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:04.676 "assigned_rate_limits": { 00:13:04.676 "rw_ios_per_sec": 0, 00:13:04.676 "rw_mbytes_per_sec": 0, 00:13:04.676 "r_mbytes_per_sec": 0, 00:13:04.676 "w_mbytes_per_sec": 0 00:13:04.676 }, 00:13:04.676 "claimed": true, 00:13:04.676 "claim_type": "exclusive_write", 00:13:04.676 "zoned": false, 00:13:04.676 "supported_io_types": { 00:13:04.676 "read": true, 00:13:04.676 "write": true, 00:13:04.676 "unmap": true, 00:13:04.676 "flush": true, 00:13:04.676 "reset": true, 00:13:04.676 "nvme_admin": false, 00:13:04.676 "nvme_io": false, 00:13:04.676 "nvme_io_md": false, 00:13:04.676 "write_zeroes": true, 00:13:04.676 "zcopy": true, 00:13:04.676 "get_zone_info": false, 00:13:04.676 "zone_management": false, 00:13:04.676 "zone_append": false, 00:13:04.676 "compare": false, 00:13:04.676 "compare_and_write": false, 00:13:04.676 "abort": true, 00:13:04.676 "seek_hole": false, 00:13:04.676 "seek_data": false, 00:13:04.676 "copy": true, 00:13:04.676 "nvme_iov_md": false 00:13:04.676 }, 00:13:04.676 "memory_domains": [ 00:13:04.676 { 00:13:04.676 "dma_device_id": "system", 00:13:04.676 "dma_device_type": 1 00:13:04.676 }, 00:13:04.676 { 00:13:04.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.676 "dma_device_type": 2 00:13:04.676 } 00:13:04.676 ], 00:13:04.676 "driver_specific": {} 00:13:04.676 } 00:13:04.676 ] 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.676 "name": "Existed_Raid", 00:13:04.676 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:04.676 "strip_size_kb": 64, 00:13:04.676 "state": "configuring", 00:13:04.676 "raid_level": "concat", 00:13:04.676 "superblock": true, 
00:13:04.676 "num_base_bdevs": 3, 00:13:04.676 "num_base_bdevs_discovered": 2, 00:13:04.676 "num_base_bdevs_operational": 3, 00:13:04.676 "base_bdevs_list": [ 00:13:04.676 { 00:13:04.676 "name": "BaseBdev1", 00:13:04.676 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:04.676 "is_configured": true, 00:13:04.676 "data_offset": 2048, 00:13:04.676 "data_size": 63488 00:13:04.676 }, 00:13:04.676 { 00:13:04.676 "name": null, 00:13:04.676 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:04.676 "is_configured": false, 00:13:04.676 "data_offset": 0, 00:13:04.676 "data_size": 63488 00:13:04.676 }, 00:13:04.676 { 00:13:04.676 "name": "BaseBdev3", 00:13:04.676 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:04.676 "is_configured": true, 00:13:04.676 "data_offset": 2048, 00:13:04.676 "data_size": 63488 00:13:04.676 } 00:13:04.676 ] 00:13:04.676 }' 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.676 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.243 [2024-11-27 14:12:35.581265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.243 "name": "Existed_Raid", 00:13:05.243 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:05.243 "strip_size_kb": 64, 00:13:05.243 "state": "configuring", 00:13:05.243 "raid_level": "concat", 00:13:05.243 "superblock": true, 00:13:05.243 "num_base_bdevs": 3, 00:13:05.243 "num_base_bdevs_discovered": 1, 00:13:05.243 "num_base_bdevs_operational": 3, 00:13:05.243 "base_bdevs_list": [ 00:13:05.243 { 00:13:05.243 "name": "BaseBdev1", 00:13:05.243 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:05.243 "is_configured": true, 00:13:05.243 "data_offset": 2048, 00:13:05.243 "data_size": 63488 00:13:05.243 }, 00:13:05.243 { 00:13:05.243 "name": null, 00:13:05.243 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:05.243 "is_configured": false, 00:13:05.243 "data_offset": 0, 00:13:05.243 "data_size": 63488 00:13:05.243 }, 00:13:05.243 { 00:13:05.243 "name": null, 00:13:05.243 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:05.243 "is_configured": false, 00:13:05.243 "data_offset": 0, 00:13:05.243 "data_size": 63488 00:13:05.243 } 00:13:05.243 ] 00:13:05.243 }' 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.243 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.810 [2024-11-27 14:12:36.141451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.810 14:12:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.810 "name": "Existed_Raid", 00:13:05.810 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:05.810 "strip_size_kb": 64, 00:13:05.810 "state": "configuring", 00:13:05.810 "raid_level": "concat", 00:13:05.810 "superblock": true, 00:13:05.810 "num_base_bdevs": 3, 00:13:05.810 "num_base_bdevs_discovered": 2, 00:13:05.810 "num_base_bdevs_operational": 3, 00:13:05.810 "base_bdevs_list": [ 00:13:05.810 { 00:13:05.810 "name": "BaseBdev1", 00:13:05.810 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:05.810 "is_configured": true, 00:13:05.810 "data_offset": 2048, 00:13:05.810 "data_size": 63488 00:13:05.810 }, 00:13:05.810 { 00:13:05.810 "name": null, 00:13:05.810 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:05.810 "is_configured": false, 00:13:05.810 "data_offset": 0, 00:13:05.810 "data_size": 63488 00:13:05.810 }, 00:13:05.810 { 00:13:05.810 "name": "BaseBdev3", 00:13:05.810 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:05.810 "is_configured": true, 00:13:05.810 "data_offset": 2048, 00:13:05.810 "data_size": 63488 00:13:05.810 } 00:13:05.810 ] 00:13:05.810 }' 00:13:05.810 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.810 
14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 [2024-11-27 14:12:36.705671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.377 14:12:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.377 "name": "Existed_Raid", 00:13:06.377 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:06.377 "strip_size_kb": 64, 00:13:06.377 "state": "configuring", 00:13:06.377 "raid_level": "concat", 00:13:06.377 "superblock": true, 00:13:06.377 "num_base_bdevs": 3, 00:13:06.377 "num_base_bdevs_discovered": 1, 00:13:06.377 "num_base_bdevs_operational": 3, 00:13:06.377 "base_bdevs_list": [ 00:13:06.377 { 00:13:06.377 "name": null, 00:13:06.377 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:06.377 "is_configured": false, 00:13:06.377 "data_offset": 0, 00:13:06.377 "data_size": 63488 00:13:06.377 }, 00:13:06.377 { 00:13:06.377 "name": null, 00:13:06.377 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:06.377 "is_configured": false, 
00:13:06.377 "data_offset": 0, 00:13:06.377 "data_size": 63488 00:13:06.377 }, 00:13:06.377 { 00:13:06.377 "name": "BaseBdev3", 00:13:06.377 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:06.377 "is_configured": true, 00:13:06.377 "data_offset": 2048, 00:13:06.377 "data_size": 63488 00:13:06.377 } 00:13:06.377 ] 00:13:06.377 }' 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.377 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.944 [2024-11-27 14:12:37.370162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.944 "name": "Existed_Raid", 00:13:06.944 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:06.944 "strip_size_kb": 64, 00:13:06.944 "state": "configuring", 00:13:06.944 "raid_level": "concat", 00:13:06.944 "superblock": true, 00:13:06.944 
"num_base_bdevs": 3, 00:13:06.944 "num_base_bdevs_discovered": 2, 00:13:06.944 "num_base_bdevs_operational": 3, 00:13:06.944 "base_bdevs_list": [ 00:13:06.944 { 00:13:06.944 "name": null, 00:13:06.944 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:06.944 "is_configured": false, 00:13:06.944 "data_offset": 0, 00:13:06.944 "data_size": 63488 00:13:06.944 }, 00:13:06.944 { 00:13:06.944 "name": "BaseBdev2", 00:13:06.944 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:06.944 "is_configured": true, 00:13:06.944 "data_offset": 2048, 00:13:06.944 "data_size": 63488 00:13:06.944 }, 00:13:06.944 { 00:13:06.944 "name": "BaseBdev3", 00:13:06.944 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:06.944 "is_configured": true, 00:13:06.944 "data_offset": 2048, 00:13:06.944 "data_size": 63488 00:13:06.944 } 00:13:06.944 ] 00:13:06.944 }' 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.944 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 67c551f2-c456-430f-afb3-3a1108d7a29f 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.510 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.769 [2024-11-27 14:12:38.040308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.769 [2024-11-27 14:12:38.040740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.769 [2024-11-27 14:12:38.040913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:07.769 [2024-11-27 14:12:38.041275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:07.769 NewBaseBdev 00:13:07.769 [2024-11-27 14:12:38.041602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.769 [2024-11-27 14:12:38.041627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:07.769 [2024-11-27 14:12:38.041841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.769 [ 00:13:07.769 { 00:13:07.769 "name": "NewBaseBdev", 00:13:07.769 "aliases": [ 00:13:07.769 "67c551f2-c456-430f-afb3-3a1108d7a29f" 00:13:07.769 ], 00:13:07.769 "product_name": "Malloc disk", 00:13:07.769 "block_size": 512, 00:13:07.769 "num_blocks": 65536, 00:13:07.769 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:07.769 "assigned_rate_limits": { 00:13:07.769 "rw_ios_per_sec": 0, 00:13:07.769 "rw_mbytes_per_sec": 0, 00:13:07.769 "r_mbytes_per_sec": 0, 00:13:07.769 "w_mbytes_per_sec": 0 00:13:07.769 }, 00:13:07.769 "claimed": true, 00:13:07.769 "claim_type": "exclusive_write", 00:13:07.769 "zoned": false, 00:13:07.769 "supported_io_types": { 00:13:07.769 "read": true, 00:13:07.769 
"write": true, 00:13:07.769 "unmap": true, 00:13:07.769 "flush": true, 00:13:07.769 "reset": true, 00:13:07.769 "nvme_admin": false, 00:13:07.769 "nvme_io": false, 00:13:07.769 "nvme_io_md": false, 00:13:07.769 "write_zeroes": true, 00:13:07.769 "zcopy": true, 00:13:07.769 "get_zone_info": false, 00:13:07.769 "zone_management": false, 00:13:07.769 "zone_append": false, 00:13:07.769 "compare": false, 00:13:07.769 "compare_and_write": false, 00:13:07.769 "abort": true, 00:13:07.769 "seek_hole": false, 00:13:07.769 "seek_data": false, 00:13:07.769 "copy": true, 00:13:07.769 "nvme_iov_md": false 00:13:07.769 }, 00:13:07.769 "memory_domains": [ 00:13:07.769 { 00:13:07.769 "dma_device_id": "system", 00:13:07.769 "dma_device_type": 1 00:13:07.769 }, 00:13:07.769 { 00:13:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.769 "dma_device_type": 2 00:13:07.769 } 00:13:07.769 ], 00:13:07.769 "driver_specific": {} 00:13:07.769 } 00:13:07.769 ] 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.769 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.769 "name": "Existed_Raid", 00:13:07.769 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:07.769 "strip_size_kb": 64, 00:13:07.769 "state": "online", 00:13:07.769 "raid_level": "concat", 00:13:07.769 "superblock": true, 00:13:07.769 "num_base_bdevs": 3, 00:13:07.769 "num_base_bdevs_discovered": 3, 00:13:07.769 "num_base_bdevs_operational": 3, 00:13:07.769 "base_bdevs_list": [ 00:13:07.769 { 00:13:07.769 "name": "NewBaseBdev", 00:13:07.769 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:07.769 "is_configured": true, 00:13:07.769 "data_offset": 2048, 00:13:07.769 "data_size": 63488 00:13:07.770 }, 00:13:07.770 { 00:13:07.770 "name": "BaseBdev2", 00:13:07.770 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:07.770 "is_configured": true, 00:13:07.770 "data_offset": 2048, 00:13:07.770 "data_size": 63488 00:13:07.770 }, 00:13:07.770 { 00:13:07.770 "name": "BaseBdev3", 00:13:07.770 "uuid": 
"34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:07.770 "is_configured": true, 00:13:07.770 "data_offset": 2048, 00:13:07.770 "data_size": 63488 00:13:07.770 } 00:13:07.770 ] 00:13:07.770 }' 00:13:07.770 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.770 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.450 [2024-11-27 14:12:38.620915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.450 "name": "Existed_Raid", 00:13:08.450 "aliases": [ 00:13:08.450 "4abc4588-f23c-4b2b-9eef-9d239db0ce38" 
00:13:08.450 ], 00:13:08.450 "product_name": "Raid Volume", 00:13:08.450 "block_size": 512, 00:13:08.450 "num_blocks": 190464, 00:13:08.450 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:08.450 "assigned_rate_limits": { 00:13:08.450 "rw_ios_per_sec": 0, 00:13:08.450 "rw_mbytes_per_sec": 0, 00:13:08.450 "r_mbytes_per_sec": 0, 00:13:08.450 "w_mbytes_per_sec": 0 00:13:08.450 }, 00:13:08.450 "claimed": false, 00:13:08.450 "zoned": false, 00:13:08.450 "supported_io_types": { 00:13:08.450 "read": true, 00:13:08.450 "write": true, 00:13:08.450 "unmap": true, 00:13:08.450 "flush": true, 00:13:08.450 "reset": true, 00:13:08.450 "nvme_admin": false, 00:13:08.450 "nvme_io": false, 00:13:08.450 "nvme_io_md": false, 00:13:08.450 "write_zeroes": true, 00:13:08.450 "zcopy": false, 00:13:08.450 "get_zone_info": false, 00:13:08.450 "zone_management": false, 00:13:08.450 "zone_append": false, 00:13:08.450 "compare": false, 00:13:08.450 "compare_and_write": false, 00:13:08.450 "abort": false, 00:13:08.450 "seek_hole": false, 00:13:08.450 "seek_data": false, 00:13:08.450 "copy": false, 00:13:08.450 "nvme_iov_md": false 00:13:08.450 }, 00:13:08.450 "memory_domains": [ 00:13:08.450 { 00:13:08.450 "dma_device_id": "system", 00:13:08.450 "dma_device_type": 1 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.450 "dma_device_type": 2 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "dma_device_id": "system", 00:13:08.450 "dma_device_type": 1 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.450 "dma_device_type": 2 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "dma_device_id": "system", 00:13:08.450 "dma_device_type": 1 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.450 "dma_device_type": 2 00:13:08.450 } 00:13:08.450 ], 00:13:08.450 "driver_specific": { 00:13:08.450 "raid": { 00:13:08.450 "uuid": "4abc4588-f23c-4b2b-9eef-9d239db0ce38", 00:13:08.450 
"strip_size_kb": 64, 00:13:08.450 "state": "online", 00:13:08.450 "raid_level": "concat", 00:13:08.450 "superblock": true, 00:13:08.450 "num_base_bdevs": 3, 00:13:08.450 "num_base_bdevs_discovered": 3, 00:13:08.450 "num_base_bdevs_operational": 3, 00:13:08.450 "base_bdevs_list": [ 00:13:08.450 { 00:13:08.450 "name": "NewBaseBdev", 00:13:08.450 "uuid": "67c551f2-c456-430f-afb3-3a1108d7a29f", 00:13:08.450 "is_configured": true, 00:13:08.450 "data_offset": 2048, 00:13:08.450 "data_size": 63488 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "name": "BaseBdev2", 00:13:08.450 "uuid": "0b4931d9-588c-4193-97ff-20f72e5b6620", 00:13:08.450 "is_configured": true, 00:13:08.450 "data_offset": 2048, 00:13:08.450 "data_size": 63488 00:13:08.450 }, 00:13:08.450 { 00:13:08.450 "name": "BaseBdev3", 00:13:08.450 "uuid": "34656d1b-e5f6-40e9-8d24-dda7556e2d45", 00:13:08.450 "is_configured": true, 00:13:08.450 "data_offset": 2048, 00:13:08.450 "data_size": 63488 00:13:08.450 } 00:13:08.450 ] 00:13:08.450 } 00:13:08.450 } 00:13:08.450 }' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:08.450 BaseBdev2 00:13:08.450 BaseBdev3' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.450 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.451 14:12:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.451 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.709 [2024-11-27 14:12:38.952598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.709 [2024-11-27 14:12:38.952756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.709 [2024-11-27 14:12:38.952895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.709 [2024-11-27 14:12:38.952974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.709 [2024-11-27 14:12:38.952995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66434 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66434 ']' 00:13:08.709 14:12:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66434 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66434 00:13:08.709 killing process with pid 66434 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66434' 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66434 00:13:08.709 [2024-11-27 14:12:38.991259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.709 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66434 00:13:08.968 [2024-11-27 14:12:39.248772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.902 14:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.902 00:13:09.902 real 0m11.877s 00:13:09.902 user 0m19.670s 00:13:09.902 sys 0m1.612s 00:13:09.902 ************************************ 00:13:09.902 END TEST raid_state_function_test_sb 00:13:09.902 ************************************ 00:13:09.902 14:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.902 14:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 14:12:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:09.902 
14:12:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.902 14:12:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.902 14:12:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 ************************************ 00:13:09.902 START TEST raid_superblock_test 00:13:09.902 ************************************ 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:09.902 
14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:09.902 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67072 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67072 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67072 ']' 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.903 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.160 [2024-11-27 14:12:40.438656] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:13:10.160 [2024-11-27 14:12:40.438862] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67072 ] 00:13:10.160 [2024-11-27 14:12:40.620505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.418 [2024-11-27 14:12:40.780934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.677 [2024-11-27 14:12:41.009094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.677 [2024-11-27 14:12:41.009167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:11.245 
14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.245 malloc1 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.245 [2024-11-27 14:12:41.536561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.245 [2024-11-27 14:12:41.536777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.245 [2024-11-27 14:12:41.536980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:11.245 [2024-11-27 14:12:41.537128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.245 [2024-11-27 14:12:41.540098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.245 [2024-11-27 14:12:41.540271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.245 pt1 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.245 malloc2 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.245 [2024-11-27 14:12:41.595028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.245 [2024-11-27 14:12:41.595101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.245 [2024-11-27 14:12:41.595140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:11.245 [2024-11-27 14:12:41.595155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.245 [2024-11-27 14:12:41.598147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.245 [2024-11-27 14:12:41.598194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.245 
pt2 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.245 malloc3 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.245 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.246 [2024-11-27 14:12:41.661785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:11.246 [2024-11-27 14:12:41.661893] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.246 [2024-11-27 14:12:41.661943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:11.246 [2024-11-27 14:12:41.661960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.246 [2024-11-27 14:12:41.664997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.246 [2024-11-27 14:12:41.665041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:11.246 pt3 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.246 [2024-11-27 14:12:41.673946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:11.246 [2024-11-27 14:12:41.676584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.246 [2024-11-27 14:12:41.676704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:11.246 [2024-11-27 14:12:41.676948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:11.246 [2024-11-27 14:12:41.676976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:11.246 [2024-11-27 14:12:41.677295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
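The log above shows the test creating three 32 MiB malloc bdevs with 512-byte blocks (`bdev_malloc_create 32 512`), wrapping each in a passthru bdev, and assembling them into a `concat` raid bdev with an on-disk superblock (`-s`). The reported `blockcnt 190464, blocklen 512` follows from the superblock reserving space on each base bdev; a sketch of that arithmetic, where the sizes come from the `rpc_cmd` calls in the log and the 2048-block per-bdev reservation is inferred from the `"data_offset": 2048` / `"data_size": 63488` fields dumped later:

```python
# Capacity math behind "blockcnt 190464, blocklen 512" in the log.
# Inputs are taken from the rpc_cmd calls above:
#   bdev_malloc_create 32 512  ->  three 32 MiB bdevs, 512-byte blocks.
BLOCK_SIZE = 512
MALLOC_MIB = 32
NUM_BASE_BDEVS = 3
SUPERBLOCK_OFFSET_BLOCKS = 2048  # inferred from "data_offset": 2048 in the dump

blocks_per_base = MALLOC_MIB * 1024 * 1024 // BLOCK_SIZE            # 65536
data_blocks_per_base = blocks_per_base - SUPERBLOCK_OFFSET_BLOCKS   # 63488
total_blocks = NUM_BASE_BDEVS * data_blocks_per_base                # 190464

print(blocks_per_base, data_blocks_per_base, total_blocks)
```

`data_blocks_per_base` matches the `"data_size": 63488` reported for each `pt` bdev, and `total_blocks` matches the raid volume's `"num_blocks": 190464`.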
00:13:11.246 [2024-11-27 14:12:41.677530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:11.246 [2024-11-27 14:12:41.677558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:11.246 [2024-11-27 14:12:41.677787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.246 14:12:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.246 "name": "raid_bdev1", 00:13:11.246 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:11.246 "strip_size_kb": 64, 00:13:11.246 "state": "online", 00:13:11.246 "raid_level": "concat", 00:13:11.246 "superblock": true, 00:13:11.246 "num_base_bdevs": 3, 00:13:11.246 "num_base_bdevs_discovered": 3, 00:13:11.246 "num_base_bdevs_operational": 3, 00:13:11.246 "base_bdevs_list": [ 00:13:11.246 { 00:13:11.246 "name": "pt1", 00:13:11.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.246 "is_configured": true, 00:13:11.246 "data_offset": 2048, 00:13:11.246 "data_size": 63488 00:13:11.246 }, 00:13:11.246 { 00:13:11.246 "name": "pt2", 00:13:11.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.246 "is_configured": true, 00:13:11.246 "data_offset": 2048, 00:13:11.246 "data_size": 63488 00:13:11.246 }, 00:13:11.246 { 00:13:11.246 "name": "pt3", 00:13:11.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.246 "is_configured": true, 00:13:11.246 "data_offset": 2048, 00:13:11.246 "data_size": 63488 00:13:11.246 } 00:13:11.246 ] 00:13:11.246 }' 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.246 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.814 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.815 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.815 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.815 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.815 [2024-11-27 14:12:42.238500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.815 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.815 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.815 "name": "raid_bdev1", 00:13:11.815 "aliases": [ 00:13:11.815 "3daa616a-7413-4f09-9a5c-cf514587995c" 00:13:11.815 ], 00:13:11.815 "product_name": "Raid Volume", 00:13:11.815 "block_size": 512, 00:13:11.815 "num_blocks": 190464, 00:13:11.815 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:11.815 "assigned_rate_limits": { 00:13:11.815 "rw_ios_per_sec": 0, 00:13:11.815 "rw_mbytes_per_sec": 0, 00:13:11.815 "r_mbytes_per_sec": 0, 00:13:11.815 "w_mbytes_per_sec": 0 00:13:11.815 }, 00:13:11.815 "claimed": false, 00:13:11.815 "zoned": false, 00:13:11.815 "supported_io_types": { 00:13:11.815 "read": true, 00:13:11.815 "write": true, 00:13:11.815 "unmap": true, 00:13:11.815 "flush": true, 00:13:11.815 "reset": true, 00:13:11.815 "nvme_admin": false, 00:13:11.815 "nvme_io": false, 00:13:11.815 "nvme_io_md": false, 00:13:11.815 "write_zeroes": true, 00:13:11.815 "zcopy": false, 00:13:11.815 "get_zone_info": false, 00:13:11.815 "zone_management": false, 00:13:11.815 "zone_append": false, 00:13:11.815 "compare": 
false, 00:13:11.815 "compare_and_write": false, 00:13:11.815 "abort": false, 00:13:11.815 "seek_hole": false, 00:13:11.815 "seek_data": false, 00:13:11.815 "copy": false, 00:13:11.815 "nvme_iov_md": false 00:13:11.815 }, 00:13:11.815 "memory_domains": [ 00:13:11.815 { 00:13:11.815 "dma_device_id": "system", 00:13:11.815 "dma_device_type": 1 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.815 "dma_device_type": 2 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "dma_device_id": "system", 00:13:11.815 "dma_device_type": 1 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.815 "dma_device_type": 2 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "dma_device_id": "system", 00:13:11.815 "dma_device_type": 1 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.815 "dma_device_type": 2 00:13:11.815 } 00:13:11.815 ], 00:13:11.815 "driver_specific": { 00:13:11.815 "raid": { 00:13:11.815 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:11.815 "strip_size_kb": 64, 00:13:11.815 "state": "online", 00:13:11.815 "raid_level": "concat", 00:13:11.815 "superblock": true, 00:13:11.815 "num_base_bdevs": 3, 00:13:11.815 "num_base_bdevs_discovered": 3, 00:13:11.815 "num_base_bdevs_operational": 3, 00:13:11.815 "base_bdevs_list": [ 00:13:11.815 { 00:13:11.815 "name": "pt1", 00:13:11.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.815 "is_configured": true, 00:13:11.815 "data_offset": 2048, 00:13:11.815 "data_size": 63488 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "name": "pt2", 00:13:11.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.815 "is_configured": true, 00:13:11.815 "data_offset": 2048, 00:13:11.815 "data_size": 63488 00:13:11.815 }, 00:13:11.815 { 00:13:11.815 "name": "pt3", 00:13:11.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.815 "is_configured": true, 00:13:11.815 "data_offset": 2048, 00:13:11.815 
"data_size": 63488 00:13:11.815 } 00:13:11.815 ] 00:13:11.815 } 00:13:11.815 } 00:13:11.815 }' 00:13:11.815 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:12.074 pt2 00:13:12.074 pt3' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
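The comparison loop above builds `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for the raid bdev and each base bdev. With only `block_size` present in this output, jq's `join` renders the remaining `null` fields as empty strings, which is why both sides of the `[[ 512 == \5\1\2\ \ \ ]]` test are `512` followed by three spaces. A small Python emulation of that jq behavior (the helper name is illustrative, not part of the test suite):

```python
def join_like_jq(values, sep=" "):
    """Emulate jq's join/1 for this case: null (None) entries become ""."""
    return sep.join("" if v is None else str(v) for v in values)

# In the log, only block_size is set for the raid bdev and each pt bdev;
# md_size, md_interleave, and dif_type come back as null.
props = {"block_size": 512, "md_size": None,
         "md_interleave": None, "dif_type": None}
cmp_str = join_like_jq([props["block_size"], props["md_size"],
                        props["md_interleave"], props["dif_type"]])
print(repr(cmp_str))  # '512   '  (three trailing spaces)
```

Because the raid bdev and every passthru base bdev produce the same `'512   '` string, each `[[ ... ]]` comparison in the log passes.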
00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 [2024-11-27 14:12:42.550540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.074 14:12:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3daa616a-7413-4f09-9a5c-cf514587995c 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3daa616a-7413-4f09-9a5c-cf514587995c ']' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 [2024-11-27 14:12:42.602165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.333 [2024-11-27 14:12:42.602207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.333 [2024-11-27 14:12:42.602322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.333 [2024-11-27 14:12:42.602423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.333 [2024-11-27 14:12:42.602450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 [2024-11-27 14:12:42.754319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:12.333 [2024-11-27 14:12:42.756858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:13:12.333 [2024-11-27 14:12:42.756944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:12.333 [2024-11-27 14:12:42.757025] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:12.333 [2024-11-27 14:12:42.757114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:12.333 [2024-11-27 14:12:42.757150] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:12.333 [2024-11-27 14:12:42.757187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.333 [2024-11-27 14:12:42.757202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:12.333 request: 00:13:12.333 { 00:13:12.333 "name": "raid_bdev1", 00:13:12.333 "raid_level": "concat", 00:13:12.333 "base_bdevs": [ 00:13:12.333 "malloc1", 00:13:12.333 "malloc2", 00:13:12.333 "malloc3" 00:13:12.333 ], 00:13:12.333 "strip_size_kb": 64, 00:13:12.333 "superblock": false, 00:13:12.333 "method": "bdev_raid_create", 00:13:12.333 "req_id": 1 00:13:12.333 } 00:13:12.333 Got JSON-RPC error response 00:13:12.333 response: 00:13:12.333 { 00:13:12.333 "code": -17, 00:13:12.333 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:12.333 } 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
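The negative test above (`NOT rpc_cmd bdev_raid_create ... malloc1 malloc2 malloc3`) re-creates the array directly on the malloc bdevs, which still carry the superblock of the deleted `raid_bdev1`, so the RPC fails with the JSON-RPC error shown (`"code": -17`, "File exists") and the `es=1` scaffolding treats that failure as the expected outcome. A sketch of checking such a response, with the error payload copied verbatim from the log (the helper function is illustrative):

```python
import json

# Error response copied from the log above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

def is_file_exists_error(resp):
    # -17 is the negated Linux errno EEXIST (17), reported here as the
    # JSON-RPC error code alongside the "File exists" message.
    return resp.get("code") == -17 and "File exists" in resp.get("message", "")

print(is_file_exists_error(response))  # True
```

This is exactly the condition the test's `NOT` wrapper expects: a nonzero exit from `rpc_cmd` driven by the -17 response, rather than a successfully created raid bdev.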
00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.333 [2024-11-27 14:12:42.818241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.333 [2024-11-27 14:12:42.818346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.333 [2024-11-27 14:12:42.818403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:12.333 [2024-11-27 14:12:42.818444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.333 [2024-11-27 14:12:42.821523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.333 [2024-11-27 14:12:42.821569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.333 [2024-11-27 14:12:42.821682] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:12.333 [2024-11-27 14:12:42.821755] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.333 pt1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.333 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.334 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.591 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.591 "name": "raid_bdev1", 
00:13:12.591 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:12.591 "strip_size_kb": 64, 00:13:12.591 "state": "configuring", 00:13:12.591 "raid_level": "concat", 00:13:12.591 "superblock": true, 00:13:12.591 "num_base_bdevs": 3, 00:13:12.591 "num_base_bdevs_discovered": 1, 00:13:12.591 "num_base_bdevs_operational": 3, 00:13:12.591 "base_bdevs_list": [ 00:13:12.591 { 00:13:12.591 "name": "pt1", 00:13:12.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.591 "is_configured": true, 00:13:12.591 "data_offset": 2048, 00:13:12.591 "data_size": 63488 00:13:12.591 }, 00:13:12.591 { 00:13:12.591 "name": null, 00:13:12.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.591 "is_configured": false, 00:13:12.591 "data_offset": 2048, 00:13:12.591 "data_size": 63488 00:13:12.591 }, 00:13:12.591 { 00:13:12.591 "name": null, 00:13:12.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.591 "is_configured": false, 00:13:12.591 "data_offset": 2048, 00:13:12.591 "data_size": 63488 00:13:12.591 } 00:13:12.591 ] 00:13:12.591 }' 00:13:12.591 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.591 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.849 [2024-11-27 14:12:43.346414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.849 [2024-11-27 14:12:43.346512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.849 [2024-11-27 14:12:43.346553] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:12.849 [2024-11-27 14:12:43.346569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.849 [2024-11-27 14:12:43.347161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.849 [2024-11-27 14:12:43.347202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.849 [2024-11-27 14:12:43.347375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.849 [2024-11-27 14:12:43.347420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.849 pt2 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.849 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.849 [2024-11-27 14:12:43.354378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.108 "name": "raid_bdev1", 00:13:13.108 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:13.108 "strip_size_kb": 64, 00:13:13.108 "state": "configuring", 00:13:13.108 "raid_level": "concat", 00:13:13.108 "superblock": true, 00:13:13.108 "num_base_bdevs": 3, 00:13:13.108 "num_base_bdevs_discovered": 1, 00:13:13.108 "num_base_bdevs_operational": 3, 00:13:13.108 "base_bdevs_list": [ 00:13:13.108 { 00:13:13.108 "name": "pt1", 00:13:13.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.108 "is_configured": true, 00:13:13.108 "data_offset": 2048, 00:13:13.108 "data_size": 63488 00:13:13.108 }, 00:13:13.108 { 00:13:13.108 "name": null, 00:13:13.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.108 "is_configured": false, 00:13:13.108 "data_offset": 0, 00:13:13.108 "data_size": 63488 00:13:13.108 }, 00:13:13.108 { 00:13:13.108 "name": null, 00:13:13.108 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.108 "is_configured": false, 00:13:13.108 "data_offset": 2048, 00:13:13.108 "data_size": 63488 00:13:13.108 } 00:13:13.108 ] 00:13:13.108 }' 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.108 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 [2024-11-27 14:12:43.890534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.674 [2024-11-27 14:12:43.890624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.674 [2024-11-27 14:12:43.890664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:13.674 [2024-11-27 14:12:43.890685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.674 [2024-11-27 14:12:43.891350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.674 [2024-11-27 14:12:43.891395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.674 [2024-11-27 14:12:43.891514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.674 [2024-11-27 14:12:43.891556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.674 pt2 00:13:13.674 14:12:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 [2024-11-27 14:12:43.898495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:13.674 [2024-11-27 14:12:43.898553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.674 [2024-11-27 14:12:43.898575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:13.674 [2024-11-27 14:12:43.898592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.674 [2024-11-27 14:12:43.899086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.674 [2024-11-27 14:12:43.899137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:13.674 [2024-11-27 14:12:43.899219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:13.674 [2024-11-27 14:12:43.899254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.674 [2024-11-27 14:12:43.899408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.674 [2024-11-27 14:12:43.899440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:13.674 [2024-11-27 14:12:43.899761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:13:13.674 [2024-11-27 14:12:43.899986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.674 [2024-11-27 14:12:43.900011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:13.674 [2024-11-27 14:12:43.900189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.674 pt3 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.674 14:12:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.674 "name": "raid_bdev1", 00:13:13.674 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:13.674 "strip_size_kb": 64, 00:13:13.674 "state": "online", 00:13:13.674 "raid_level": "concat", 00:13:13.674 "superblock": true, 00:13:13.674 "num_base_bdevs": 3, 00:13:13.674 "num_base_bdevs_discovered": 3, 00:13:13.674 "num_base_bdevs_operational": 3, 00:13:13.674 "base_bdevs_list": [ 00:13:13.674 { 00:13:13.674 "name": "pt1", 00:13:13.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.674 "is_configured": true, 00:13:13.674 "data_offset": 2048, 00:13:13.674 "data_size": 63488 00:13:13.674 }, 00:13:13.674 { 00:13:13.674 "name": "pt2", 00:13:13.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.674 "is_configured": true, 00:13:13.674 "data_offset": 2048, 00:13:13.674 "data_size": 63488 00:13:13.674 }, 00:13:13.674 { 00:13:13.674 "name": "pt3", 00:13:13.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.674 "is_configured": true, 00:13:13.674 "data_offset": 2048, 00:13:13.674 "data_size": 63488 00:13:13.674 } 00:13:13.674 ] 00:13:13.674 }' 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.674 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.936 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.936 [2024-11-27 14:12:44.439088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.195 "name": "raid_bdev1", 00:13:14.195 "aliases": [ 00:13:14.195 "3daa616a-7413-4f09-9a5c-cf514587995c" 00:13:14.195 ], 00:13:14.195 "product_name": "Raid Volume", 00:13:14.195 "block_size": 512, 00:13:14.195 "num_blocks": 190464, 00:13:14.195 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:14.195 "assigned_rate_limits": { 00:13:14.195 "rw_ios_per_sec": 0, 00:13:14.195 "rw_mbytes_per_sec": 0, 00:13:14.195 "r_mbytes_per_sec": 0, 00:13:14.195 "w_mbytes_per_sec": 0 00:13:14.195 }, 00:13:14.195 "claimed": false, 00:13:14.195 "zoned": false, 00:13:14.195 "supported_io_types": { 00:13:14.195 "read": true, 00:13:14.195 "write": true, 00:13:14.195 "unmap": true, 00:13:14.195 "flush": true, 00:13:14.195 "reset": true, 00:13:14.195 "nvme_admin": false, 00:13:14.195 "nvme_io": false, 
00:13:14.195 "nvme_io_md": false, 00:13:14.195 "write_zeroes": true, 00:13:14.195 "zcopy": false, 00:13:14.195 "get_zone_info": false, 00:13:14.195 "zone_management": false, 00:13:14.195 "zone_append": false, 00:13:14.195 "compare": false, 00:13:14.195 "compare_and_write": false, 00:13:14.195 "abort": false, 00:13:14.195 "seek_hole": false, 00:13:14.195 "seek_data": false, 00:13:14.195 "copy": false, 00:13:14.195 "nvme_iov_md": false 00:13:14.195 }, 00:13:14.195 "memory_domains": [ 00:13:14.195 { 00:13:14.195 "dma_device_id": "system", 00:13:14.195 "dma_device_type": 1 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.195 "dma_device_type": 2 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "dma_device_id": "system", 00:13:14.195 "dma_device_type": 1 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.195 "dma_device_type": 2 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "dma_device_id": "system", 00:13:14.195 "dma_device_type": 1 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.195 "dma_device_type": 2 00:13:14.195 } 00:13:14.195 ], 00:13:14.195 "driver_specific": { 00:13:14.195 "raid": { 00:13:14.195 "uuid": "3daa616a-7413-4f09-9a5c-cf514587995c", 00:13:14.195 "strip_size_kb": 64, 00:13:14.195 "state": "online", 00:13:14.195 "raid_level": "concat", 00:13:14.195 "superblock": true, 00:13:14.195 "num_base_bdevs": 3, 00:13:14.195 "num_base_bdevs_discovered": 3, 00:13:14.195 "num_base_bdevs_operational": 3, 00:13:14.195 "base_bdevs_list": [ 00:13:14.195 { 00:13:14.195 "name": "pt1", 00:13:14.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.195 "is_configured": true, 00:13:14.195 "data_offset": 2048, 00:13:14.195 "data_size": 63488 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "name": "pt2", 00:13:14.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.195 "is_configured": true, 00:13:14.195 "data_offset": 2048, 00:13:14.195 
"data_size": 63488 00:13:14.195 }, 00:13:14.195 { 00:13:14.195 "name": "pt3", 00:13:14.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.195 "is_configured": true, 00:13:14.195 "data_offset": 2048, 00:13:14.195 "data_size": 63488 00:13:14.195 } 00:13:14.195 ] 00:13:14.195 } 00:13:14.195 } 00:13:14.195 }' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:14.195 pt2 00:13:14.195 pt3' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.195 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:14.196 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.196 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.196 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.454 [2024-11-27 14:12:44.763095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3daa616a-7413-4f09-9a5c-cf514587995c '!=' 3daa616a-7413-4f09-9a5c-cf514587995c ']' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67072 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67072 ']' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67072 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67072 00:13:14.454 killing process with pid 67072 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67072' 00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67072 00:13:14.454 [2024-11-27 14:12:44.842095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:13:14.454 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67072 00:13:14.454 [2024-11-27 14:12:44.842202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.454 [2024-11-27 14:12:44.842283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.454 [2024-11-27 14:12:44.842309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:14.712 [2024-11-27 14:12:45.112812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.084 14:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:16.084 00:13:16.084 real 0m5.819s 00:13:16.084 user 0m8.792s 00:13:16.084 sys 0m0.861s 00:13:16.084 14:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.084 14:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 ************************************ 00:13:16.084 END TEST raid_superblock_test 00:13:16.084 ************************************ 00:13:16.084 14:12:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:16.084 14:12:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:16.084 14:12:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.084 14:12:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 ************************************ 00:13:16.084 START TEST raid_read_error_test 00:13:16.084 ************************************ 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:16.084 14:12:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nJOtWNg0VP 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67331 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67331 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67331 ']' 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.084 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 [2024-11-27 14:12:46.339279] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:13:16.084 [2024-11-27 14:12:46.339458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67331 ] 00:13:16.084 [2024-11-27 14:12:46.525565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.342 [2024-11-27 14:12:46.656549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.601 [2024-11-27 14:12:46.861196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.601 [2024-11-27 14:12:46.861276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.859 BaseBdev1_malloc 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.859 true 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.859 [2024-11-27 14:12:47.358943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:16.859 [2024-11-27 14:12:47.359010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.859 [2024-11-27 14:12:47.359040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:16.859 [2024-11-27 14:12:47.359059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.859 [2024-11-27 14:12:47.361841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.859 [2024-11-27 14:12:47.361889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.859 BaseBdev1 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.859 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 BaseBdev2_malloc 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 true 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 [2024-11-27 14:12:47.424074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:17.118 [2024-11-27 14:12:47.424139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.118 [2024-11-27 14:12:47.424165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:17.118 [2024-11-27 14:12:47.424183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.118 [2024-11-27 14:12:47.427151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.118 [2024-11-27 14:12:47.427197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.118 BaseBdev2 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 BaseBdev3_malloc 00:13:17.118 14:12:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 true 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 [2024-11-27 14:12:47.498379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:17.118 [2024-11-27 14:12:47.498447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.118 [2024-11-27 14:12:47.498476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:17.118 [2024-11-27 14:12:47.498496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.118 [2024-11-27 14:12:47.501438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.118 [2024-11-27 14:12:47.501487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:17.118 BaseBdev3 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.118 [2024-11-27 14:12:47.506484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.118 [2024-11-27 14:12:47.508986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.118 [2024-11-27 14:12:47.509104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.118 [2024-11-27 14:12:47.509368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:17.118 [2024-11-27 14:12:47.509397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:17.118 [2024-11-27 14:12:47.509723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:17.118 [2024-11-27 14:12:47.509990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:17.118 [2024-11-27 14:12:47.510037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:17.118 [2024-11-27 14:12:47.510220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.118 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.119 14:12:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.119 "name": "raid_bdev1", 00:13:17.119 "uuid": "9f489326-937d-4317-a9bf-721325d03490", 00:13:17.119 "strip_size_kb": 64, 00:13:17.119 "state": "online", 00:13:17.119 "raid_level": "concat", 00:13:17.119 "superblock": true, 00:13:17.119 "num_base_bdevs": 3, 00:13:17.119 "num_base_bdevs_discovered": 3, 00:13:17.119 "num_base_bdevs_operational": 3, 00:13:17.119 "base_bdevs_list": [ 00:13:17.119 { 00:13:17.119 "name": "BaseBdev1", 00:13:17.119 "uuid": "3390aa66-f9f0-549e-8f6f-3cfc1bebf025", 00:13:17.119 "is_configured": true, 00:13:17.119 "data_offset": 2048, 00:13:17.119 "data_size": 63488 00:13:17.119 }, 00:13:17.119 { 00:13:17.119 "name": "BaseBdev2", 00:13:17.119 "uuid": "15c9ca57-7004-5f45-9a6e-38d11fdc63ed", 00:13:17.119 "is_configured": true, 00:13:17.119 "data_offset": 2048, 00:13:17.119 "data_size": 63488 
00:13:17.119 }, 00:13:17.119 { 00:13:17.119 "name": "BaseBdev3", 00:13:17.119 "uuid": "3cf69776-1765-52d0-ae9e-5190ebe29a55", 00:13:17.119 "is_configured": true, 00:13:17.119 "data_offset": 2048, 00:13:17.119 "data_size": 63488 00:13:17.119 } 00:13:17.119 ] 00:13:17.119 }' 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.119 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.685 14:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:17.685 14:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:17.685 [2024-11-27 14:12:48.120032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.620 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.621 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.621 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.621 "name": "raid_bdev1", 00:13:18.621 "uuid": "9f489326-937d-4317-a9bf-721325d03490", 00:13:18.621 "strip_size_kb": 64, 00:13:18.621 "state": "online", 00:13:18.621 "raid_level": "concat", 00:13:18.621 "superblock": true, 00:13:18.621 "num_base_bdevs": 3, 00:13:18.621 "num_base_bdevs_discovered": 3, 00:13:18.621 "num_base_bdevs_operational": 3, 00:13:18.621 "base_bdevs_list": [ 00:13:18.621 { 00:13:18.621 "name": "BaseBdev1", 00:13:18.621 "uuid": "3390aa66-f9f0-549e-8f6f-3cfc1bebf025", 00:13:18.621 "is_configured": true, 00:13:18.621 "data_offset": 2048, 00:13:18.621 "data_size": 63488 
00:13:18.621 }, 00:13:18.621 { 00:13:18.621 "name": "BaseBdev2", 00:13:18.621 "uuid": "15c9ca57-7004-5f45-9a6e-38d11fdc63ed", 00:13:18.621 "is_configured": true, 00:13:18.621 "data_offset": 2048, 00:13:18.621 "data_size": 63488 00:13:18.621 }, 00:13:18.621 { 00:13:18.621 "name": "BaseBdev3", 00:13:18.621 "uuid": "3cf69776-1765-52d0-ae9e-5190ebe29a55", 00:13:18.621 "is_configured": true, 00:13:18.621 "data_offset": 2048, 00:13:18.621 "data_size": 63488 00:13:18.621 } 00:13:18.621 ] 00:13:18.621 }' 00:13:18.621 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.621 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.187 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.187 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.187 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.187 [2024-11-27 14:12:49.531250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.187 [2024-11-27 14:12:49.531292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.187 [2024-11-27 14:12:49.534715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.187 [2024-11-27 14:12:49.534779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.187 [2024-11-27 14:12:49.534852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.187 [2024-11-27 14:12:49.534873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:19.187 { 00:13:19.187 "results": [ 00:13:19.187 { 00:13:19.187 "job": "raid_bdev1", 00:13:19.187 "core_mask": "0x1", 00:13:19.188 "workload": "randrw", 00:13:19.188 "percentage": 50, 
00:13:19.188 "status": "finished", 00:13:19.188 "queue_depth": 1, 00:13:19.188 "io_size": 131072, 00:13:19.188 "runtime": 1.408896, 00:13:19.188 "iops": 10463.511856091578, 00:13:19.188 "mibps": 1307.9389820114473, 00:13:19.188 "io_failed": 1, 00:13:19.188 "io_timeout": 0, 00:13:19.188 "avg_latency_us": 132.90807816344275, 00:13:19.188 "min_latency_us": 40.96, 00:13:19.188 "max_latency_us": 1876.7127272727273 00:13:19.188 } 00:13:19.188 ], 00:13:19.188 "core_count": 1 00:13:19.188 } 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67331 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67331 ']' 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67331 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67331 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.188 killing process with pid 67331 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67331' 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67331 00:13:19.188 [2024-11-27 14:12:49.569391] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.188 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67331 00:13:19.446 [2024-11-27 14:12:49.777453] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nJOtWNg0VP 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:20.382 00:13:20.382 real 0m4.674s 00:13:20.382 user 0m5.755s 00:13:20.382 sys 0m0.585s 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.382 14:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.382 ************************************ 00:13:20.382 END TEST raid_read_error_test 00:13:20.382 ************************************ 00:13:20.641 14:12:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:20.641 14:12:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:20.641 14:12:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.641 14:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.641 ************************************ 00:13:20.641 START TEST raid_write_error_test 00:13:20.641 ************************************ 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:13:20.641 14:12:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:20.641 14:12:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QDkSYfn6vR 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67471 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67471 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67471 ']' 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.641 14:12:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.641 [2024-11-27 14:12:51.043193] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:13:20.641 [2024-11-27 14:12:51.043336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67471 ] 00:13:20.899 [2024-11-27 14:12:51.217952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.899 [2024-11-27 14:12:51.354032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.158 [2024-11-27 14:12:51.559980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.158 [2024-11-27 14:12:51.560066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 BaseBdev1_malloc 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 true 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 [2024-11-27 14:12:52.089369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:21.725 [2024-11-27 14:12:52.089590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.725 [2024-11-27 14:12:52.089668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:21.725 [2024-11-27 14:12:52.089930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.725 [2024-11-27 14:12:52.092780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.725 [2024-11-27 14:12:52.092983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.725 BaseBdev1 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.725 BaseBdev2_malloc 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 true 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.725 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.725 [2024-11-27 14:12:52.157607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:21.725 [2024-11-27 14:12:52.157811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.726 [2024-11-27 14:12:52.157898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:21.726 [2024-11-27 14:12:52.158166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.726 [2024-11-27 14:12:52.160973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.726 [2024-11-27 14:12:52.161026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:21.726 BaseBdev2 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.726 14:12:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.726 BaseBdev3_malloc 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.726 true 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.726 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.984 [2024-11-27 14:12:52.238083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:21.984 [2024-11-27 14:12:52.238290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.984 [2024-11-27 14:12:52.238366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:21.984 [2024-11-27 14:12:52.238618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.984 [2024-11-27 14:12:52.241609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.984 [2024-11-27 14:12:52.241663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:21.984 BaseBdev3 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.984 [2024-11-27 14:12:52.250369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.984 [2024-11-27 14:12:52.252801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.984 [2024-11-27 14:12:52.252935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.984 [2024-11-27 14:12:52.253226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:21.984 [2024-11-27 14:12:52.253245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:21.984 [2024-11-27 14:12:52.253557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:21.984 [2024-11-27 14:12:52.253782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:21.984 [2024-11-27 14:12:52.253831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:21.984 [2024-11-27 14:12:52.254045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.984 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.984 "name": "raid_bdev1", 00:13:21.984 "uuid": "cbb52492-d5f2-4c87-a550-976ab13d4cdd", 00:13:21.984 "strip_size_kb": 64, 00:13:21.984 "state": "online", 00:13:21.984 "raid_level": "concat", 00:13:21.984 "superblock": true, 00:13:21.984 "num_base_bdevs": 3, 00:13:21.984 "num_base_bdevs_discovered": 3, 00:13:21.984 "num_base_bdevs_operational": 3, 00:13:21.984 "base_bdevs_list": [ 00:13:21.984 { 00:13:21.984 
"name": "BaseBdev1", 00:13:21.984 "uuid": "208a688b-2e35-51ec-a140-20fee4b91ccb", 00:13:21.984 "is_configured": true, 00:13:21.984 "data_offset": 2048, 00:13:21.984 "data_size": 63488 00:13:21.984 }, 00:13:21.984 { 00:13:21.984 "name": "BaseBdev2", 00:13:21.984 "uuid": "bebb9946-433e-54d6-910d-97d1c6b4b772", 00:13:21.984 "is_configured": true, 00:13:21.985 "data_offset": 2048, 00:13:21.985 "data_size": 63488 00:13:21.985 }, 00:13:21.985 { 00:13:21.985 "name": "BaseBdev3", 00:13:21.985 "uuid": "4858e812-07ce-5706-ae1e-715cf4d39283", 00:13:21.985 "is_configured": true, 00:13:21.985 "data_offset": 2048, 00:13:21.985 "data_size": 63488 00:13:21.985 } 00:13:21.985 ] 00:13:21.985 }' 00:13:21.985 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.985 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.243 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:22.243 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:22.501 [2024-11-27 14:12:52.875967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.437 "name": "raid_bdev1", 00:13:23.437 "uuid": "cbb52492-d5f2-4c87-a550-976ab13d4cdd", 00:13:23.437 "strip_size_kb": 64, 00:13:23.437 "state": "online", 
00:13:23.437 "raid_level": "concat", 00:13:23.437 "superblock": true, 00:13:23.437 "num_base_bdevs": 3, 00:13:23.437 "num_base_bdevs_discovered": 3, 00:13:23.437 "num_base_bdevs_operational": 3, 00:13:23.437 "base_bdevs_list": [ 00:13:23.437 { 00:13:23.437 "name": "BaseBdev1", 00:13:23.437 "uuid": "208a688b-2e35-51ec-a140-20fee4b91ccb", 00:13:23.437 "is_configured": true, 00:13:23.437 "data_offset": 2048, 00:13:23.437 "data_size": 63488 00:13:23.437 }, 00:13:23.437 { 00:13:23.437 "name": "BaseBdev2", 00:13:23.437 "uuid": "bebb9946-433e-54d6-910d-97d1c6b4b772", 00:13:23.437 "is_configured": true, 00:13:23.437 "data_offset": 2048, 00:13:23.437 "data_size": 63488 00:13:23.437 }, 00:13:23.437 { 00:13:23.437 "name": "BaseBdev3", 00:13:23.437 "uuid": "4858e812-07ce-5706-ae1e-715cf4d39283", 00:13:23.437 "is_configured": true, 00:13:23.437 "data_offset": 2048, 00:13:23.437 "data_size": 63488 00:13:23.437 } 00:13:23.437 ] 00:13:23.437 }' 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.437 14:12:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.006 [2024-11-27 14:12:54.292389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.006 [2024-11-27 14:12:54.292426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.006 [2024-11-27 14:12:54.296020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.006 [2024-11-27 14:12:54.296082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.006 [2024-11-27 14:12:54.296138] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.006 [2024-11-27 14:12:54.296156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:24.006 { 00:13:24.006 "results": [ 00:13:24.006 { 00:13:24.006 "job": "raid_bdev1", 00:13:24.006 "core_mask": "0x1", 00:13:24.006 "workload": "randrw", 00:13:24.006 "percentage": 50, 00:13:24.006 "status": "finished", 00:13:24.006 "queue_depth": 1, 00:13:24.006 "io_size": 131072, 00:13:24.006 "runtime": 1.413784, 00:13:24.006 "iops": 10203.114478590789, 00:13:24.006 "mibps": 1275.3893098238486, 00:13:24.006 "io_failed": 1, 00:13:24.006 "io_timeout": 0, 00:13:24.006 "avg_latency_us": 136.31645891887123, 00:13:24.006 "min_latency_us": 42.589090909090906, 00:13:24.006 "max_latency_us": 1846.9236363636364 00:13:24.006 } 00:13:24.006 ], 00:13:24.006 "core_count": 1 00:13:24.006 } 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67471 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67471 ']' 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67471 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67471 00:13:24.006 killing process with pid 67471 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.006 
14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67471' 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67471 00:13:24.006 [2024-11-27 14:12:54.330032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.006 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67471 00:13:24.265 [2024-11-27 14:12:54.537746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QDkSYfn6vR 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:25.201 ************************************ 00:13:25.201 END TEST raid_write_error_test 00:13:25.201 ************************************ 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:25.201 00:13:25.201 real 0m4.711s 00:13:25.201 user 0m5.826s 00:13:25.201 sys 0m0.573s 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.201 14:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.201 14:12:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:25.201 14:12:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:13:25.201 14:12:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:25.201 14:12:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.201 14:12:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.201 ************************************ 00:13:25.201 START TEST raid_state_function_test 00:13:25.201 ************************************ 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.201 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67621 00:13:25.460 Process raid pid: 67621 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67621' 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67621 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67621 ']' 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.460 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.460 [2024-11-27 14:12:55.827590] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:13:25.460 [2024-11-27 14:12:55.827772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.719 [2024-11-27 14:12:56.012056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.719 [2024-11-27 14:12:56.146378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.977 [2024-11-27 14:12:56.363807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.978 [2024-11-27 14:12:56.363869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.545 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.545 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:26.545 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.545 14:12:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.546 [2024-11-27 14:12:56.764276] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.546 [2024-11-27 14:12:56.764356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.546 [2024-11-27 14:12:56.764375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.546 [2024-11-27 14:12:56.764391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.546 [2024-11-27 14:12:56.764401] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.546 [2024-11-27 14:12:56.764415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.546 
14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.546 "name": "Existed_Raid", 00:13:26.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.546 "strip_size_kb": 0, 00:13:26.546 "state": "configuring", 00:13:26.546 "raid_level": "raid1", 00:13:26.546 "superblock": false, 00:13:26.546 "num_base_bdevs": 3, 00:13:26.546 "num_base_bdevs_discovered": 0, 00:13:26.546 "num_base_bdevs_operational": 3, 00:13:26.546 "base_bdevs_list": [ 00:13:26.546 { 00:13:26.546 "name": "BaseBdev1", 00:13:26.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.546 "is_configured": false, 00:13:26.546 "data_offset": 0, 00:13:26.546 "data_size": 0 00:13:26.546 }, 00:13:26.546 { 00:13:26.546 "name": "BaseBdev2", 00:13:26.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.546 "is_configured": false, 00:13:26.546 "data_offset": 0, 00:13:26.546 "data_size": 0 00:13:26.546 }, 00:13:26.546 { 00:13:26.546 "name": "BaseBdev3", 00:13:26.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.546 "is_configured": false, 00:13:26.546 "data_offset": 0, 00:13:26.546 "data_size": 0 00:13:26.546 } 00:13:26.546 ] 00:13:26.546 }' 00:13:26.546 14:12:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.546 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.804 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.804 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.804 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.804 [2024-11-27 14:12:57.280358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.804 [2024-11-27 14:12:57.280549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.805 [2024-11-27 14:12:57.292342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.805 [2024-11-27 14:12:57.292543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.805 [2024-11-27 14:12:57.292669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.805 [2024-11-27 14:12:57.292847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.805 [2024-11-27 14:12:57.292976] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.805 [2024-11-27 14:12:57.293038] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.805 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 [2024-11-27 14:12:57.338233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.064 BaseBdev1 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.064 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 [ 00:13:27.064 { 00:13:27.064 "name": "BaseBdev1", 00:13:27.064 "aliases": [ 00:13:27.064 "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6" 00:13:27.065 ], 00:13:27.065 "product_name": "Malloc disk", 00:13:27.065 "block_size": 512, 00:13:27.065 "num_blocks": 65536, 00:13:27.065 "uuid": "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6", 00:13:27.065 "assigned_rate_limits": { 00:13:27.065 "rw_ios_per_sec": 0, 00:13:27.065 "rw_mbytes_per_sec": 0, 00:13:27.065 "r_mbytes_per_sec": 0, 00:13:27.065 "w_mbytes_per_sec": 0 00:13:27.065 }, 00:13:27.065 "claimed": true, 00:13:27.065 "claim_type": "exclusive_write", 00:13:27.065 "zoned": false, 00:13:27.065 "supported_io_types": { 00:13:27.065 "read": true, 00:13:27.065 "write": true, 00:13:27.065 "unmap": true, 00:13:27.065 "flush": true, 00:13:27.065 "reset": true, 00:13:27.065 "nvme_admin": false, 00:13:27.065 "nvme_io": false, 00:13:27.065 "nvme_io_md": false, 00:13:27.065 "write_zeroes": true, 00:13:27.065 "zcopy": true, 00:13:27.065 "get_zone_info": false, 00:13:27.065 "zone_management": false, 00:13:27.065 "zone_append": false, 00:13:27.065 "compare": false, 00:13:27.065 "compare_and_write": false, 00:13:27.065 "abort": true, 00:13:27.065 "seek_hole": false, 00:13:27.065 "seek_data": false, 00:13:27.065 "copy": true, 00:13:27.065 "nvme_iov_md": false 00:13:27.065 }, 00:13:27.065 "memory_domains": [ 00:13:27.065 { 00:13:27.065 "dma_device_id": "system", 00:13:27.065 "dma_device_type": 1 00:13:27.065 }, 00:13:27.065 { 00:13:27.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.065 "dma_device_type": 2 00:13:27.065 } 00:13:27.065 ], 00:13:27.065 "driver_specific": {} 00:13:27.065 } 00:13:27.065 ] 00:13:27.065 14:12:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:27.065 "name": "Existed_Raid", 00:13:27.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.065 "strip_size_kb": 0, 00:13:27.065 "state": "configuring", 00:13:27.065 "raid_level": "raid1", 00:13:27.065 "superblock": false, 00:13:27.065 "num_base_bdevs": 3, 00:13:27.065 "num_base_bdevs_discovered": 1, 00:13:27.065 "num_base_bdevs_operational": 3, 00:13:27.065 "base_bdevs_list": [ 00:13:27.065 { 00:13:27.065 "name": "BaseBdev1", 00:13:27.065 "uuid": "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6", 00:13:27.065 "is_configured": true, 00:13:27.065 "data_offset": 0, 00:13:27.065 "data_size": 65536 00:13:27.065 }, 00:13:27.065 { 00:13:27.065 "name": "BaseBdev2", 00:13:27.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.065 "is_configured": false, 00:13:27.065 "data_offset": 0, 00:13:27.065 "data_size": 0 00:13:27.065 }, 00:13:27.065 { 00:13:27.065 "name": "BaseBdev3", 00:13:27.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.065 "is_configured": false, 00:13:27.065 "data_offset": 0, 00:13:27.065 "data_size": 0 00:13:27.065 } 00:13:27.065 ] 00:13:27.065 }' 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.065 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.632 [2024-11-27 14:12:57.894427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.632 [2024-11-27 14:12:57.894492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.632 [2024-11-27 14:12:57.902463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.632 [2024-11-27 14:12:57.904936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.632 [2024-11-27 14:12:57.905138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.632 [2024-11-27 14:12:57.905167] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:27.632 [2024-11-27 14:12:57.905186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.632 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.633 "name": "Existed_Raid", 00:13:27.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.633 "strip_size_kb": 0, 00:13:27.633 "state": "configuring", 00:13:27.633 "raid_level": "raid1", 00:13:27.633 "superblock": false, 00:13:27.633 "num_base_bdevs": 3, 00:13:27.633 "num_base_bdevs_discovered": 1, 00:13:27.633 "num_base_bdevs_operational": 3, 00:13:27.633 "base_bdevs_list": [ 00:13:27.633 { 00:13:27.633 "name": "BaseBdev1", 00:13:27.633 "uuid": "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": "BaseBdev2", 00:13:27.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.633 
"is_configured": false, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 0 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": "BaseBdev3", 00:13:27.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.633 "is_configured": false, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 0 00:13:27.633 } 00:13:27.633 ] 00:13:27.633 }' 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.633 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.211 [2024-11-27 14:12:58.485086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.211 BaseBdev2 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.211 14:12:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.211 [ 00:13:28.211 { 00:13:28.211 "name": "BaseBdev2", 00:13:28.211 "aliases": [ 00:13:28.211 "e7e2484a-055d-455f-9caf-6a6abd4cb902" 00:13:28.211 ], 00:13:28.211 "product_name": "Malloc disk", 00:13:28.211 "block_size": 512, 00:13:28.211 "num_blocks": 65536, 00:13:28.211 "uuid": "e7e2484a-055d-455f-9caf-6a6abd4cb902", 00:13:28.211 "assigned_rate_limits": { 00:13:28.211 "rw_ios_per_sec": 0, 00:13:28.211 "rw_mbytes_per_sec": 0, 00:13:28.211 "r_mbytes_per_sec": 0, 00:13:28.211 "w_mbytes_per_sec": 0 00:13:28.211 }, 00:13:28.211 "claimed": true, 00:13:28.211 "claim_type": "exclusive_write", 00:13:28.211 "zoned": false, 00:13:28.211 "supported_io_types": { 00:13:28.211 "read": true, 00:13:28.211 "write": true, 00:13:28.211 "unmap": true, 00:13:28.211 "flush": true, 00:13:28.211 "reset": true, 00:13:28.211 "nvme_admin": false, 00:13:28.211 "nvme_io": false, 00:13:28.211 "nvme_io_md": false, 00:13:28.211 "write_zeroes": true, 00:13:28.211 "zcopy": true, 00:13:28.211 "get_zone_info": false, 00:13:28.211 "zone_management": false, 00:13:28.211 "zone_append": false, 00:13:28.211 "compare": false, 00:13:28.211 "compare_and_write": false, 00:13:28.211 "abort": true, 00:13:28.211 "seek_hole": false, 00:13:28.211 "seek_data": false, 00:13:28.211 "copy": true, 00:13:28.211 "nvme_iov_md": false 00:13:28.211 }, 00:13:28.211 
"memory_domains": [ 00:13:28.211 { 00:13:28.211 "dma_device_id": "system", 00:13:28.211 "dma_device_type": 1 00:13:28.211 }, 00:13:28.211 { 00:13:28.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.211 "dma_device_type": 2 00:13:28.211 } 00:13:28.211 ], 00:13:28.211 "driver_specific": {} 00:13:28.211 } 00:13:28.211 ] 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.211 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.212 "name": "Existed_Raid", 00:13:28.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.212 "strip_size_kb": 0, 00:13:28.212 "state": "configuring", 00:13:28.212 "raid_level": "raid1", 00:13:28.212 "superblock": false, 00:13:28.212 "num_base_bdevs": 3, 00:13:28.212 "num_base_bdevs_discovered": 2, 00:13:28.212 "num_base_bdevs_operational": 3, 00:13:28.212 "base_bdevs_list": [ 00:13:28.212 { 00:13:28.212 "name": "BaseBdev1", 00:13:28.212 "uuid": "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6", 00:13:28.212 "is_configured": true, 00:13:28.212 "data_offset": 0, 00:13:28.212 "data_size": 65536 00:13:28.212 }, 00:13:28.212 { 00:13:28.212 "name": "BaseBdev2", 00:13:28.212 "uuid": "e7e2484a-055d-455f-9caf-6a6abd4cb902", 00:13:28.212 "is_configured": true, 00:13:28.212 "data_offset": 0, 00:13:28.212 "data_size": 65536 00:13:28.212 }, 00:13:28.212 { 00:13:28.212 "name": "BaseBdev3", 00:13:28.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.212 "is_configured": false, 00:13:28.212 "data_offset": 0, 00:13:28.212 "data_size": 0 00:13:28.212 } 00:13:28.212 ] 00:13:28.212 }' 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.212 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 [2024-11-27 14:12:59.126347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.779 [2024-11-27 14:12:59.126620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:28.779 [2024-11-27 14:12:59.126657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:28.779 [2024-11-27 14:12:59.127054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:28.779 [2024-11-27 14:12:59.127287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:28.779 [2024-11-27 14:12:59.127305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:28.779 [2024-11-27 14:12:59.127634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.779 BaseBdev3 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 [ 00:13:28.779 { 00:13:28.779 "name": "BaseBdev3", 00:13:28.779 "aliases": [ 00:13:28.779 "c904fcf8-84ec-4182-a02e-e98898472192" 00:13:28.779 ], 00:13:28.779 "product_name": "Malloc disk", 00:13:28.779 "block_size": 512, 00:13:28.779 "num_blocks": 65536, 00:13:28.779 "uuid": "c904fcf8-84ec-4182-a02e-e98898472192", 00:13:28.779 "assigned_rate_limits": { 00:13:28.779 "rw_ios_per_sec": 0, 00:13:28.779 "rw_mbytes_per_sec": 0, 00:13:28.779 "r_mbytes_per_sec": 0, 00:13:28.779 "w_mbytes_per_sec": 0 00:13:28.779 }, 00:13:28.779 "claimed": true, 00:13:28.779 "claim_type": "exclusive_write", 00:13:28.779 "zoned": false, 00:13:28.779 "supported_io_types": { 00:13:28.779 "read": true, 00:13:28.779 "write": true, 00:13:28.779 "unmap": true, 00:13:28.779 "flush": true, 00:13:28.779 "reset": true, 00:13:28.779 "nvme_admin": false, 00:13:28.779 "nvme_io": false, 00:13:28.779 "nvme_io_md": false, 00:13:28.779 "write_zeroes": true, 00:13:28.779 "zcopy": true, 00:13:28.779 "get_zone_info": false, 00:13:28.779 "zone_management": false, 00:13:28.779 "zone_append": false, 00:13:28.779 "compare": false, 00:13:28.779 "compare_and_write": false, 00:13:28.779 "abort": true, 00:13:28.779 "seek_hole": false, 00:13:28.779 "seek_data": false, 00:13:28.779 
"copy": true, 00:13:28.779 "nvme_iov_md": false 00:13:28.779 }, 00:13:28.779 "memory_domains": [ 00:13:28.779 { 00:13:28.779 "dma_device_id": "system", 00:13:28.779 "dma_device_type": 1 00:13:28.779 }, 00:13:28.779 { 00:13:28.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.779 "dma_device_type": 2 00:13:28.779 } 00:13:28.779 ], 00:13:28.779 "driver_specific": {} 00:13:28.779 } 00:13:28.779 ] 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.779 14:12:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.779 "name": "Existed_Raid", 00:13:28.779 "uuid": "d503862d-0569-4aa7-9f41-855bfed0243c", 00:13:28.779 "strip_size_kb": 0, 00:13:28.779 "state": "online", 00:13:28.779 "raid_level": "raid1", 00:13:28.779 "superblock": false, 00:13:28.779 "num_base_bdevs": 3, 00:13:28.779 "num_base_bdevs_discovered": 3, 00:13:28.779 "num_base_bdevs_operational": 3, 00:13:28.779 "base_bdevs_list": [ 00:13:28.779 { 00:13:28.779 "name": "BaseBdev1", 00:13:28.779 "uuid": "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6", 00:13:28.779 "is_configured": true, 00:13:28.779 "data_offset": 0, 00:13:28.779 "data_size": 65536 00:13:28.779 }, 00:13:28.779 { 00:13:28.779 "name": "BaseBdev2", 00:13:28.779 "uuid": "e7e2484a-055d-455f-9caf-6a6abd4cb902", 00:13:28.779 "is_configured": true, 00:13:28.779 "data_offset": 0, 00:13:28.779 "data_size": 65536 00:13:28.779 }, 00:13:28.779 { 00:13:28.779 "name": "BaseBdev3", 00:13:28.779 "uuid": "c904fcf8-84ec-4182-a02e-e98898472192", 00:13:28.779 "is_configured": true, 00:13:28.779 "data_offset": 0, 00:13:28.779 "data_size": 65536 00:13:28.779 } 00:13:28.779 ] 00:13:28.779 }' 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.779 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.345 14:12:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.345 [2024-11-27 14:12:59.702959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.345 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.345 "name": "Existed_Raid", 00:13:29.345 "aliases": [ 00:13:29.345 "d503862d-0569-4aa7-9f41-855bfed0243c" 00:13:29.345 ], 00:13:29.345 "product_name": "Raid Volume", 00:13:29.345 "block_size": 512, 00:13:29.345 "num_blocks": 65536, 00:13:29.345 "uuid": "d503862d-0569-4aa7-9f41-855bfed0243c", 00:13:29.345 "assigned_rate_limits": { 00:13:29.345 "rw_ios_per_sec": 0, 00:13:29.345 "rw_mbytes_per_sec": 0, 00:13:29.345 "r_mbytes_per_sec": 0, 00:13:29.345 "w_mbytes_per_sec": 0 00:13:29.345 }, 00:13:29.345 "claimed": false, 00:13:29.345 "zoned": false, 
00:13:29.345 "supported_io_types": { 00:13:29.345 "read": true, 00:13:29.345 "write": true, 00:13:29.345 "unmap": false, 00:13:29.345 "flush": false, 00:13:29.345 "reset": true, 00:13:29.345 "nvme_admin": false, 00:13:29.345 "nvme_io": false, 00:13:29.345 "nvme_io_md": false, 00:13:29.345 "write_zeroes": true, 00:13:29.345 "zcopy": false, 00:13:29.345 "get_zone_info": false, 00:13:29.345 "zone_management": false, 00:13:29.345 "zone_append": false, 00:13:29.345 "compare": false, 00:13:29.345 "compare_and_write": false, 00:13:29.345 "abort": false, 00:13:29.345 "seek_hole": false, 00:13:29.345 "seek_data": false, 00:13:29.345 "copy": false, 00:13:29.345 "nvme_iov_md": false 00:13:29.345 }, 00:13:29.345 "memory_domains": [ 00:13:29.345 { 00:13:29.345 "dma_device_id": "system", 00:13:29.345 "dma_device_type": 1 00:13:29.345 }, 00:13:29.345 { 00:13:29.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.345 "dma_device_type": 2 00:13:29.345 }, 00:13:29.345 { 00:13:29.345 "dma_device_id": "system", 00:13:29.345 "dma_device_type": 1 00:13:29.345 }, 00:13:29.345 { 00:13:29.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.345 "dma_device_type": 2 00:13:29.345 }, 00:13:29.345 { 00:13:29.345 "dma_device_id": "system", 00:13:29.345 "dma_device_type": 1 00:13:29.345 }, 00:13:29.345 { 00:13:29.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.345 "dma_device_type": 2 00:13:29.345 } 00:13:29.345 ], 00:13:29.345 "driver_specific": { 00:13:29.345 "raid": { 00:13:29.345 "uuid": "d503862d-0569-4aa7-9f41-855bfed0243c", 00:13:29.345 "strip_size_kb": 0, 00:13:29.345 "state": "online", 00:13:29.345 "raid_level": "raid1", 00:13:29.345 "superblock": false, 00:13:29.345 "num_base_bdevs": 3, 00:13:29.345 "num_base_bdevs_discovered": 3, 00:13:29.345 "num_base_bdevs_operational": 3, 00:13:29.345 "base_bdevs_list": [ 00:13:29.345 { 00:13:29.345 "name": "BaseBdev1", 00:13:29.345 "uuid": "5fc5cb6c-8b56-4bf1-8ce5-9ee89cbea3c6", 00:13:29.345 "is_configured": true, 00:13:29.345 
"data_offset": 0, 00:13:29.345 "data_size": 65536 00:13:29.345 }, 00:13:29.345 { 00:13:29.345 "name": "BaseBdev2", 00:13:29.345 "uuid": "e7e2484a-055d-455f-9caf-6a6abd4cb902", 00:13:29.345 "is_configured": true, 00:13:29.345 "data_offset": 0, 00:13:29.345 "data_size": 65536 00:13:29.345 }, 00:13:29.345 { 00:13:29.346 "name": "BaseBdev3", 00:13:29.346 "uuid": "c904fcf8-84ec-4182-a02e-e98898472192", 00:13:29.346 "is_configured": true, 00:13:29.346 "data_offset": 0, 00:13:29.346 "data_size": 65536 00:13:29.346 } 00:13:29.346 ] 00:13:29.346 } 00:13:29.346 } 00:13:29.346 }' 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:29.346 BaseBdev2 00:13:29.346 BaseBdev3' 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.346 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.605 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.606 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.606 [2024-11-27 14:13:00.010711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.606 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.865 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.865 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.865 "name": "Existed_Raid", 00:13:29.865 "uuid": "d503862d-0569-4aa7-9f41-855bfed0243c", 00:13:29.865 "strip_size_kb": 0, 00:13:29.865 "state": "online", 00:13:29.865 "raid_level": "raid1", 00:13:29.865 "superblock": false, 00:13:29.865 "num_base_bdevs": 3, 00:13:29.865 "num_base_bdevs_discovered": 2, 00:13:29.865 "num_base_bdevs_operational": 2, 00:13:29.865 "base_bdevs_list": [ 00:13:29.865 { 00:13:29.865 "name": null, 00:13:29.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.865 "is_configured": false, 00:13:29.865 "data_offset": 0, 00:13:29.865 "data_size": 65536 00:13:29.865 }, 00:13:29.865 { 00:13:29.865 "name": "BaseBdev2", 00:13:29.865 "uuid": "e7e2484a-055d-455f-9caf-6a6abd4cb902", 00:13:29.865 "is_configured": true, 00:13:29.865 "data_offset": 0, 00:13:29.865 "data_size": 65536 00:13:29.865 }, 00:13:29.865 { 00:13:29.865 "name": "BaseBdev3", 00:13:29.865 "uuid": "c904fcf8-84ec-4182-a02e-e98898472192", 00:13:29.865 "is_configured": true, 00:13:29.865 "data_offset": 0, 00:13:29.865 "data_size": 65536 00:13:29.865 } 00:13:29.865 ] 
00:13:29.865 }' 00:13:29.865 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.865 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.123 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:30.123 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.123 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.123 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.123 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.123 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.381 [2024-11-27 14:13:00.667225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.381 14:13:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.381 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.381 [2024-11-27 14:13:00.805926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.381 [2024-11-27 14:13:00.806063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.643 [2024-11-27 14:13:00.893174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.643 [2024-11-27 14:13:00.893467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.643 [2024-11-27 14:13:00.893505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.643 14:13:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 BaseBdev2 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.643 
14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 [ 00:13:30.643 { 00:13:30.643 "name": "BaseBdev2", 00:13:30.643 "aliases": [ 00:13:30.643 "9edefdc0-b064-499a-93ab-55dba8cdc1d6" 00:13:30.643 ], 00:13:30.643 "product_name": "Malloc disk", 00:13:30.643 "block_size": 512, 00:13:30.643 "num_blocks": 65536, 00:13:30.643 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:30.643 "assigned_rate_limits": { 00:13:30.643 "rw_ios_per_sec": 0, 00:13:30.643 "rw_mbytes_per_sec": 0, 00:13:30.643 "r_mbytes_per_sec": 0, 00:13:30.643 "w_mbytes_per_sec": 0 00:13:30.643 }, 00:13:30.643 "claimed": false, 00:13:30.643 "zoned": false, 00:13:30.643 "supported_io_types": { 00:13:30.643 "read": true, 00:13:30.643 "write": true, 00:13:30.643 "unmap": true, 00:13:30.643 "flush": true, 00:13:30.643 "reset": true, 00:13:30.643 "nvme_admin": false, 00:13:30.643 "nvme_io": false, 00:13:30.643 "nvme_io_md": false, 00:13:30.643 "write_zeroes": true, 
00:13:30.643 "zcopy": true, 00:13:30.643 "get_zone_info": false, 00:13:30.643 "zone_management": false, 00:13:30.643 "zone_append": false, 00:13:30.643 "compare": false, 00:13:30.643 "compare_and_write": false, 00:13:30.643 "abort": true, 00:13:30.643 "seek_hole": false, 00:13:30.643 "seek_data": false, 00:13:30.643 "copy": true, 00:13:30.643 "nvme_iov_md": false 00:13:30.643 }, 00:13:30.643 "memory_domains": [ 00:13:30.643 { 00:13:30.643 "dma_device_id": "system", 00:13:30.643 "dma_device_type": 1 00:13:30.643 }, 00:13:30.643 { 00:13:30.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.643 "dma_device_type": 2 00:13:30.643 } 00:13:30.643 ], 00:13:30.643 "driver_specific": {} 00:13:30.643 } 00:13:30.643 ] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 BaseBdev3 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.643 14:13:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 [ 00:13:30.643 { 00:13:30.643 "name": "BaseBdev3", 00:13:30.643 "aliases": [ 00:13:30.643 "6d883653-6036-4b7c-b8f5-0a9a28f6a656" 00:13:30.643 ], 00:13:30.643 "product_name": "Malloc disk", 00:13:30.643 "block_size": 512, 00:13:30.643 "num_blocks": 65536, 00:13:30.643 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:30.643 "assigned_rate_limits": { 00:13:30.643 "rw_ios_per_sec": 0, 00:13:30.643 "rw_mbytes_per_sec": 0, 00:13:30.643 "r_mbytes_per_sec": 0, 00:13:30.643 "w_mbytes_per_sec": 0 00:13:30.643 }, 00:13:30.643 "claimed": false, 00:13:30.643 "zoned": false, 00:13:30.643 "supported_io_types": { 00:13:30.643 "read": true, 00:13:30.643 "write": true, 00:13:30.643 "unmap": true, 00:13:30.643 "flush": true, 00:13:30.643 "reset": true, 00:13:30.643 "nvme_admin": false, 00:13:30.643 "nvme_io": false, 00:13:30.643 "nvme_io_md": false, 00:13:30.643 "write_zeroes": true, 
00:13:30.643 "zcopy": true, 00:13:30.643 "get_zone_info": false, 00:13:30.643 "zone_management": false, 00:13:30.643 "zone_append": false, 00:13:30.643 "compare": false, 00:13:30.643 "compare_and_write": false, 00:13:30.643 "abort": true, 00:13:30.643 "seek_hole": false, 00:13:30.643 "seek_data": false, 00:13:30.643 "copy": true, 00:13:30.643 "nvme_iov_md": false 00:13:30.643 }, 00:13:30.643 "memory_domains": [ 00:13:30.643 { 00:13:30.643 "dma_device_id": "system", 00:13:30.643 "dma_device_type": 1 00:13:30.643 }, 00:13:30.643 { 00:13:30.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.643 "dma_device_type": 2 00:13:30.643 } 00:13:30.643 ], 00:13:30.643 "driver_specific": {} 00:13:30.643 } 00:13:30.643 ] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.643 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 [2024-11-27 14:13:01.106052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.643 [2024-11-27 14:13:01.106261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.644 [2024-11-27 14:13:01.106439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.644 [2024-11-27 14:13:01.108921] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.644 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.902 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:30.902 "name": "Existed_Raid", 00:13:30.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.902 "strip_size_kb": 0, 00:13:30.902 "state": "configuring", 00:13:30.902 "raid_level": "raid1", 00:13:30.902 "superblock": false, 00:13:30.902 "num_base_bdevs": 3, 00:13:30.902 "num_base_bdevs_discovered": 2, 00:13:30.902 "num_base_bdevs_operational": 3, 00:13:30.902 "base_bdevs_list": [ 00:13:30.902 { 00:13:30.902 "name": "BaseBdev1", 00:13:30.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.902 "is_configured": false, 00:13:30.902 "data_offset": 0, 00:13:30.902 "data_size": 0 00:13:30.902 }, 00:13:30.902 { 00:13:30.902 "name": "BaseBdev2", 00:13:30.902 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:30.902 "is_configured": true, 00:13:30.902 "data_offset": 0, 00:13:30.902 "data_size": 65536 00:13:30.902 }, 00:13:30.902 { 00:13:30.902 "name": "BaseBdev3", 00:13:30.902 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:30.902 "is_configured": true, 00:13:30.902 "data_offset": 0, 00:13:30.902 "data_size": 65536 00:13:30.902 } 00:13:30.902 ] 00:13:30.902 }' 00:13:30.902 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.902 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.160 [2024-11-27 14:13:01.590215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.160 "name": "Existed_Raid", 00:13:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.160 "strip_size_kb": 0, 00:13:31.160 "state": "configuring", 00:13:31.160 "raid_level": "raid1", 00:13:31.160 "superblock": false, 00:13:31.160 "num_base_bdevs": 3, 
00:13:31.160 "num_base_bdevs_discovered": 1, 00:13:31.160 "num_base_bdevs_operational": 3, 00:13:31.160 "base_bdevs_list": [ 00:13:31.160 { 00:13:31.160 "name": "BaseBdev1", 00:13:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.160 "is_configured": false, 00:13:31.160 "data_offset": 0, 00:13:31.160 "data_size": 0 00:13:31.160 }, 00:13:31.160 { 00:13:31.160 "name": null, 00:13:31.160 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:31.160 "is_configured": false, 00:13:31.160 "data_offset": 0, 00:13:31.160 "data_size": 65536 00:13:31.160 }, 00:13:31.160 { 00:13:31.160 "name": "BaseBdev3", 00:13:31.160 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:31.160 "is_configured": true, 00:13:31.160 "data_offset": 0, 00:13:31.160 "data_size": 65536 00:13:31.160 } 00:13:31.160 ] 00:13:31.160 }' 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.160 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.727 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.727 [2024-11-27 14:13:02.216373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.727 BaseBdev1 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.727 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.728 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.728 [ 00:13:31.728 { 00:13:31.728 "name": "BaseBdev1", 00:13:31.728 "aliases": [ 00:13:31.728 "ad8999cb-f7ab-45a9-a262-0584446844a7" 00:13:31.728 ], 00:13:31.728 "product_name": "Malloc disk", 
00:13:31.728 "block_size": 512, 00:13:31.728 "num_blocks": 65536, 00:13:31.728 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:31.728 "assigned_rate_limits": { 00:13:31.728 "rw_ios_per_sec": 0, 00:13:31.728 "rw_mbytes_per_sec": 0, 00:13:31.728 "r_mbytes_per_sec": 0, 00:13:31.728 "w_mbytes_per_sec": 0 00:13:32.004 }, 00:13:32.004 "claimed": true, 00:13:32.004 "claim_type": "exclusive_write", 00:13:32.004 "zoned": false, 00:13:32.004 "supported_io_types": { 00:13:32.004 "read": true, 00:13:32.004 "write": true, 00:13:32.004 "unmap": true, 00:13:32.004 "flush": true, 00:13:32.004 "reset": true, 00:13:32.004 "nvme_admin": false, 00:13:32.004 "nvme_io": false, 00:13:32.004 "nvme_io_md": false, 00:13:32.004 "write_zeroes": true, 00:13:32.004 "zcopy": true, 00:13:32.004 "get_zone_info": false, 00:13:32.004 "zone_management": false, 00:13:32.004 "zone_append": false, 00:13:32.004 "compare": false, 00:13:32.004 "compare_and_write": false, 00:13:32.004 "abort": true, 00:13:32.004 "seek_hole": false, 00:13:32.004 "seek_data": false, 00:13:32.004 "copy": true, 00:13:32.004 "nvme_iov_md": false 00:13:32.004 }, 00:13:32.004 "memory_domains": [ 00:13:32.004 { 00:13:32.004 "dma_device_id": "system", 00:13:32.004 "dma_device_type": 1 00:13:32.004 }, 00:13:32.004 { 00:13:32.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.004 "dma_device_type": 2 00:13:32.004 } 00:13:32.004 ], 00:13:32.004 "driver_specific": {} 00:13:32.004 } 00:13:32.004 ] 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.004 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.005 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.005 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.005 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.005 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.005 "name": "Existed_Raid", 00:13:32.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.005 "strip_size_kb": 0, 00:13:32.005 "state": "configuring", 00:13:32.005 "raid_level": "raid1", 00:13:32.005 "superblock": false, 00:13:32.005 "num_base_bdevs": 3, 00:13:32.005 "num_base_bdevs_discovered": 2, 00:13:32.005 "num_base_bdevs_operational": 3, 00:13:32.005 "base_bdevs_list": [ 00:13:32.005 { 00:13:32.005 "name": "BaseBdev1", 00:13:32.005 "uuid": 
"ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:32.005 "is_configured": true, 00:13:32.005 "data_offset": 0, 00:13:32.005 "data_size": 65536 00:13:32.005 }, 00:13:32.005 { 00:13:32.005 "name": null, 00:13:32.005 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:32.005 "is_configured": false, 00:13:32.005 "data_offset": 0, 00:13:32.005 "data_size": 65536 00:13:32.005 }, 00:13:32.005 { 00:13:32.005 "name": "BaseBdev3", 00:13:32.005 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:32.005 "is_configured": true, 00:13:32.005 "data_offset": 0, 00:13:32.005 "data_size": 65536 00:13:32.005 } 00:13:32.005 ] 00:13:32.005 }' 00:13:32.005 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.005 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.575 [2024-11-27 14:13:02.832593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.575 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.575 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.575 "name": "Existed_Raid", 00:13:32.575 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:32.575 "strip_size_kb": 0, 00:13:32.575 "state": "configuring", 00:13:32.575 "raid_level": "raid1", 00:13:32.575 "superblock": false, 00:13:32.575 "num_base_bdevs": 3, 00:13:32.575 "num_base_bdevs_discovered": 1, 00:13:32.575 "num_base_bdevs_operational": 3, 00:13:32.575 "base_bdevs_list": [ 00:13:32.575 { 00:13:32.575 "name": "BaseBdev1", 00:13:32.575 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:32.575 "is_configured": true, 00:13:32.575 "data_offset": 0, 00:13:32.575 "data_size": 65536 00:13:32.575 }, 00:13:32.575 { 00:13:32.575 "name": null, 00:13:32.575 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:32.575 "is_configured": false, 00:13:32.575 "data_offset": 0, 00:13:32.575 "data_size": 65536 00:13:32.575 }, 00:13:32.575 { 00:13:32.575 "name": null, 00:13:32.575 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:32.575 "is_configured": false, 00:13:32.575 "data_offset": 0, 00:13:32.575 "data_size": 65536 00:13:32.576 } 00:13:32.576 ] 00:13:32.576 }' 00:13:32.576 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.576 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.142 [2024-11-27 14:13:03.404798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.142 "name": "Existed_Raid", 00:13:33.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.142 "strip_size_kb": 0, 00:13:33.142 "state": "configuring", 00:13:33.142 "raid_level": "raid1", 00:13:33.142 "superblock": false, 00:13:33.142 "num_base_bdevs": 3, 00:13:33.142 "num_base_bdevs_discovered": 2, 00:13:33.142 "num_base_bdevs_operational": 3, 00:13:33.142 "base_bdevs_list": [ 00:13:33.142 { 00:13:33.142 "name": "BaseBdev1", 00:13:33.142 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:33.142 "is_configured": true, 00:13:33.142 "data_offset": 0, 00:13:33.142 "data_size": 65536 00:13:33.142 }, 00:13:33.142 { 00:13:33.142 "name": null, 00:13:33.142 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:33.142 "is_configured": false, 00:13:33.142 "data_offset": 0, 00:13:33.142 "data_size": 65536 00:13:33.142 }, 00:13:33.142 { 00:13:33.142 "name": "BaseBdev3", 00:13:33.142 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:33.142 "is_configured": true, 00:13:33.142 "data_offset": 0, 00:13:33.142 "data_size": 65536 00:13:33.142 } 00:13:33.142 ] 00:13:33.142 }' 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.142 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.709 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.709 [2024-11-27 14:13:03.964945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.709 14:13:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.709 "name": "Existed_Raid", 00:13:33.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.709 "strip_size_kb": 0, 00:13:33.709 "state": "configuring", 00:13:33.709 "raid_level": "raid1", 00:13:33.709 "superblock": false, 00:13:33.709 "num_base_bdevs": 3, 00:13:33.709 "num_base_bdevs_discovered": 1, 00:13:33.709 "num_base_bdevs_operational": 3, 00:13:33.709 "base_bdevs_list": [ 00:13:33.709 { 00:13:33.709 "name": null, 00:13:33.709 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:33.709 "is_configured": false, 00:13:33.709 "data_offset": 0, 00:13:33.709 "data_size": 65536 00:13:33.709 }, 00:13:33.709 { 00:13:33.709 "name": null, 00:13:33.709 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:33.709 "is_configured": false, 00:13:33.709 "data_offset": 0, 00:13:33.709 "data_size": 65536 00:13:33.709 }, 00:13:33.709 { 00:13:33.709 "name": "BaseBdev3", 00:13:33.709 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:33.709 "is_configured": true, 00:13:33.709 "data_offset": 0, 00:13:33.709 "data_size": 65536 00:13:33.709 } 00:13:33.709 ] 00:13:33.709 }' 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.709 14:13:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.275 [2024-11-27 14:13:04.583125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.275 "name": "Existed_Raid", 00:13:34.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.275 "strip_size_kb": 0, 00:13:34.275 "state": "configuring", 00:13:34.275 "raid_level": "raid1", 00:13:34.275 "superblock": false, 00:13:34.275 "num_base_bdevs": 3, 00:13:34.275 "num_base_bdevs_discovered": 2, 00:13:34.275 "num_base_bdevs_operational": 3, 00:13:34.275 "base_bdevs_list": [ 00:13:34.275 { 00:13:34.275 "name": null, 00:13:34.275 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:34.275 "is_configured": false, 00:13:34.275 "data_offset": 0, 00:13:34.275 "data_size": 65536 00:13:34.275 }, 00:13:34.275 { 00:13:34.275 "name": "BaseBdev2", 00:13:34.275 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:34.275 "is_configured": true, 00:13:34.275 "data_offset": 0, 00:13:34.275 "data_size": 65536 00:13:34.275 }, 00:13:34.275 { 
00:13:34.275 "name": "BaseBdev3", 00:13:34.275 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:34.275 "is_configured": true, 00:13:34.275 "data_offset": 0, 00:13:34.275 "data_size": 65536 00:13:34.275 } 00:13:34.275 ] 00:13:34.275 }' 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.275 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ad8999cb-f7ab-45a9-a262-0584446844a7 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.842 14:13:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 [2024-11-27 14:13:05.241243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:34.842 [2024-11-27 14:13:05.241529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:34.842 [2024-11-27 14:13:05.241553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:34.842 [2024-11-27 14:13:05.241903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:34.842 [2024-11-27 14:13:05.242126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:34.842 [2024-11-27 14:13:05.242149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:34.842 [2024-11-27 14:13:05.242452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.842 NewBaseBdev 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 [ 00:13:34.842 { 00:13:34.842 "name": "NewBaseBdev", 00:13:34.842 "aliases": [ 00:13:34.842 "ad8999cb-f7ab-45a9-a262-0584446844a7" 00:13:34.842 ], 00:13:34.842 "product_name": "Malloc disk", 00:13:34.842 "block_size": 512, 00:13:34.842 "num_blocks": 65536, 00:13:34.842 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:34.842 "assigned_rate_limits": { 00:13:34.842 "rw_ios_per_sec": 0, 00:13:34.842 "rw_mbytes_per_sec": 0, 00:13:34.842 "r_mbytes_per_sec": 0, 00:13:34.842 "w_mbytes_per_sec": 0 00:13:34.842 }, 00:13:34.842 "claimed": true, 00:13:34.842 "claim_type": "exclusive_write", 00:13:34.842 "zoned": false, 00:13:34.842 "supported_io_types": { 00:13:34.842 "read": true, 00:13:34.842 "write": true, 00:13:34.842 "unmap": true, 00:13:34.842 "flush": true, 00:13:34.842 "reset": true, 00:13:34.842 "nvme_admin": false, 00:13:34.842 "nvme_io": false, 00:13:34.842 "nvme_io_md": false, 00:13:34.842 "write_zeroes": true, 00:13:34.842 "zcopy": true, 00:13:34.842 "get_zone_info": false, 00:13:34.842 "zone_management": false, 00:13:34.842 "zone_append": false, 00:13:34.842 "compare": false, 00:13:34.842 "compare_and_write": false, 00:13:34.842 "abort": true, 00:13:34.842 "seek_hole": false, 00:13:34.842 "seek_data": false, 00:13:34.842 "copy": true, 00:13:34.842 "nvme_iov_md": false 00:13:34.842 }, 00:13:34.842 "memory_domains": [ 00:13:34.842 { 00:13:34.842 
"dma_device_id": "system", 00:13:34.842 "dma_device_type": 1 00:13:34.842 }, 00:13:34.842 { 00:13:34.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.842 "dma_device_type": 2 00:13:34.842 } 00:13:34.842 ], 00:13:34.842 "driver_specific": {} 00:13:34.842 } 00:13:34.842 ] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.842 "name": "Existed_Raid", 00:13:34.842 "uuid": "c5f052f4-b0cc-4ec8-b116-f2aaf7ee14b9", 00:13:34.842 "strip_size_kb": 0, 00:13:34.842 "state": "online", 00:13:34.842 "raid_level": "raid1", 00:13:34.842 "superblock": false, 00:13:34.842 "num_base_bdevs": 3, 00:13:34.842 "num_base_bdevs_discovered": 3, 00:13:34.842 "num_base_bdevs_operational": 3, 00:13:34.842 "base_bdevs_list": [ 00:13:34.842 { 00:13:34.842 "name": "NewBaseBdev", 00:13:34.842 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:34.842 "is_configured": true, 00:13:34.842 "data_offset": 0, 00:13:34.842 "data_size": 65536 00:13:34.842 }, 00:13:34.842 { 00:13:34.842 "name": "BaseBdev2", 00:13:34.842 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:34.842 "is_configured": true, 00:13:34.842 "data_offset": 0, 00:13:34.842 "data_size": 65536 00:13:34.842 }, 00:13:34.842 { 00:13:34.842 "name": "BaseBdev3", 00:13:34.842 "uuid": "6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:34.842 "is_configured": true, 00:13:34.842 "data_offset": 0, 00:13:34.842 "data_size": 65536 00:13:34.842 } 00:13:34.842 ] 00:13:34.842 }' 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.842 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.409 14:13:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.409 [2024-11-27 14:13:05.761839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.409 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.409 "name": "Existed_Raid", 00:13:35.409 "aliases": [ 00:13:35.409 "c5f052f4-b0cc-4ec8-b116-f2aaf7ee14b9" 00:13:35.409 ], 00:13:35.409 "product_name": "Raid Volume", 00:13:35.409 "block_size": 512, 00:13:35.409 "num_blocks": 65536, 00:13:35.409 "uuid": "c5f052f4-b0cc-4ec8-b116-f2aaf7ee14b9", 00:13:35.409 "assigned_rate_limits": { 00:13:35.409 "rw_ios_per_sec": 0, 00:13:35.409 "rw_mbytes_per_sec": 0, 00:13:35.409 "r_mbytes_per_sec": 0, 00:13:35.409 "w_mbytes_per_sec": 0 00:13:35.409 }, 00:13:35.409 "claimed": false, 00:13:35.409 "zoned": false, 00:13:35.409 "supported_io_types": { 00:13:35.409 "read": true, 00:13:35.409 "write": true, 00:13:35.409 "unmap": false, 00:13:35.409 "flush": false, 00:13:35.409 "reset": true, 00:13:35.409 "nvme_admin": false, 00:13:35.409 "nvme_io": false, 00:13:35.409 "nvme_io_md": false, 00:13:35.409 "write_zeroes": true, 00:13:35.409 "zcopy": false, 00:13:35.409 
"get_zone_info": false, 00:13:35.409 "zone_management": false, 00:13:35.409 "zone_append": false, 00:13:35.409 "compare": false, 00:13:35.409 "compare_and_write": false, 00:13:35.409 "abort": false, 00:13:35.409 "seek_hole": false, 00:13:35.409 "seek_data": false, 00:13:35.409 "copy": false, 00:13:35.409 "nvme_iov_md": false 00:13:35.409 }, 00:13:35.409 "memory_domains": [ 00:13:35.409 { 00:13:35.409 "dma_device_id": "system", 00:13:35.409 "dma_device_type": 1 00:13:35.409 }, 00:13:35.409 { 00:13:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.409 "dma_device_type": 2 00:13:35.409 }, 00:13:35.409 { 00:13:35.409 "dma_device_id": "system", 00:13:35.409 "dma_device_type": 1 00:13:35.409 }, 00:13:35.409 { 00:13:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.409 "dma_device_type": 2 00:13:35.409 }, 00:13:35.409 { 00:13:35.409 "dma_device_id": "system", 00:13:35.409 "dma_device_type": 1 00:13:35.409 }, 00:13:35.410 { 00:13:35.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.410 "dma_device_type": 2 00:13:35.410 } 00:13:35.410 ], 00:13:35.410 "driver_specific": { 00:13:35.410 "raid": { 00:13:35.410 "uuid": "c5f052f4-b0cc-4ec8-b116-f2aaf7ee14b9", 00:13:35.410 "strip_size_kb": 0, 00:13:35.410 "state": "online", 00:13:35.410 "raid_level": "raid1", 00:13:35.410 "superblock": false, 00:13:35.410 "num_base_bdevs": 3, 00:13:35.410 "num_base_bdevs_discovered": 3, 00:13:35.410 "num_base_bdevs_operational": 3, 00:13:35.410 "base_bdevs_list": [ 00:13:35.410 { 00:13:35.410 "name": "NewBaseBdev", 00:13:35.410 "uuid": "ad8999cb-f7ab-45a9-a262-0584446844a7", 00:13:35.410 "is_configured": true, 00:13:35.410 "data_offset": 0, 00:13:35.410 "data_size": 65536 00:13:35.410 }, 00:13:35.410 { 00:13:35.410 "name": "BaseBdev2", 00:13:35.410 "uuid": "9edefdc0-b064-499a-93ab-55dba8cdc1d6", 00:13:35.410 "is_configured": true, 00:13:35.410 "data_offset": 0, 00:13:35.410 "data_size": 65536 00:13:35.410 }, 00:13:35.410 { 00:13:35.410 "name": "BaseBdev3", 00:13:35.410 "uuid": 
"6d883653-6036-4b7c-b8f5-0a9a28f6a656", 00:13:35.410 "is_configured": true, 00:13:35.410 "data_offset": 0, 00:13:35.410 "data_size": 65536 00:13:35.410 } 00:13:35.410 ] 00:13:35.410 } 00:13:35.410 } 00:13:35.410 }' 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:35.410 BaseBdev2 00:13:35.410 BaseBdev3' 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.410 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.667 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.667 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.668 
[2024-11-27 14:13:06.097512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.668 [2024-11-27 14:13:06.097555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.668 [2024-11-27 14:13:06.097647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.668 [2024-11-27 14:13:06.098051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.668 [2024-11-27 14:13:06.098072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67621 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67621 ']' 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67621 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67621 00:13:35.668 killing process with pid 67621 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67621' 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67621 00:13:35.668 [2024-11-27 
14:13:06.137102] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.668 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67621 00:13:35.925 [2024-11-27 14:13:06.403505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.300 ************************************ 00:13:37.300 END TEST raid_state_function_test 00:13:37.300 ************************************ 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:37.300 00:13:37.300 real 0m11.748s 00:13:37.300 user 0m19.494s 00:13:37.300 sys 0m1.582s 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.300 14:13:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:37.300 14:13:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:37.300 14:13:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.300 14:13:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.300 ************************************ 00:13:37.300 START TEST raid_state_function_test_sb 00:13:37.300 ************************************ 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:37.300 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:37.301 14:13:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:37.301 
14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:37.301 Process raid pid: 68254 00:13:37.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68254 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68254' 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68254 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68254 ']' 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.301 14:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.301 [2024-11-27 14:13:07.610231] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:13:37.301 [2024-11-27 14:13:07.610540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.301 [2024-11-27 14:13:07.786912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.559 [2024-11-27 14:13:07.926977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.817 [2024-11-27 14:13:08.154902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.817 [2024-11-27 14:13:08.155577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.382 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.383 [2024-11-27 14:13:08.641315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.383 [2024-11-27 14:13:08.641429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.383 [2024-11-27 14:13:08.641451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.383 [2024-11-27 14:13:08.641472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.383 [2024-11-27 14:13:08.641485] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:38.383 [2024-11-27 14:13:08.641504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.383 "name": "Existed_Raid", 00:13:38.383 "uuid": "2dc65672-5365-4d98-b66b-64d8a1c88645", 00:13:38.383 "strip_size_kb": 0, 00:13:38.383 "state": "configuring", 00:13:38.383 "raid_level": "raid1", 00:13:38.383 "superblock": true, 00:13:38.383 "num_base_bdevs": 3, 00:13:38.383 "num_base_bdevs_discovered": 0, 00:13:38.383 "num_base_bdevs_operational": 3, 00:13:38.383 "base_bdevs_list": [ 00:13:38.383 { 00:13:38.383 "name": "BaseBdev1", 00:13:38.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.383 "is_configured": false, 00:13:38.383 "data_offset": 0, 00:13:38.383 "data_size": 0 00:13:38.383 }, 00:13:38.383 { 00:13:38.383 "name": "BaseBdev2", 00:13:38.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.383 "is_configured": false, 00:13:38.383 "data_offset": 0, 00:13:38.383 "data_size": 0 00:13:38.383 }, 00:13:38.383 { 00:13:38.383 "name": "BaseBdev3", 00:13:38.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.383 "is_configured": false, 00:13:38.383 "data_offset": 0, 00:13:38.383 "data_size": 0 00:13:38.383 } 00:13:38.383 ] 00:13:38.383 }' 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.383 14:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.641 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:38.641 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.641 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.898 [2024-11-27 14:13:09.153455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:38.898 [2024-11-27 14:13:09.153545] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.898 [2024-11-27 14:13:09.165359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.898 [2024-11-27 14:13:09.165437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.898 [2024-11-27 14:13:09.165456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:38.898 [2024-11-27 14:13:09.165475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:38.898 [2024-11-27 14:13:09.165487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:38.898 [2024-11-27 14:13:09.165504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.898 [2024-11-27 14:13:09.212617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.898 BaseBdev1 
00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.898 [ 00:13:38.898 { 00:13:38.898 "name": "BaseBdev1", 00:13:38.898 "aliases": [ 00:13:38.898 "1196f183-1dd4-4564-b5c0-a20b5f72e408" 00:13:38.898 ], 00:13:38.898 "product_name": "Malloc disk", 00:13:38.898 "block_size": 512, 00:13:38.898 "num_blocks": 65536, 00:13:38.898 "uuid": "1196f183-1dd4-4564-b5c0-a20b5f72e408", 00:13:38.898 "assigned_rate_limits": { 00:13:38.898 
"rw_ios_per_sec": 0, 00:13:38.898 "rw_mbytes_per_sec": 0, 00:13:38.898 "r_mbytes_per_sec": 0, 00:13:38.898 "w_mbytes_per_sec": 0 00:13:38.898 }, 00:13:38.898 "claimed": true, 00:13:38.898 "claim_type": "exclusive_write", 00:13:38.898 "zoned": false, 00:13:38.898 "supported_io_types": { 00:13:38.898 "read": true, 00:13:38.898 "write": true, 00:13:38.898 "unmap": true, 00:13:38.898 "flush": true, 00:13:38.898 "reset": true, 00:13:38.898 "nvme_admin": false, 00:13:38.898 "nvme_io": false, 00:13:38.898 "nvme_io_md": false, 00:13:38.898 "write_zeroes": true, 00:13:38.898 "zcopy": true, 00:13:38.898 "get_zone_info": false, 00:13:38.898 "zone_management": false, 00:13:38.898 "zone_append": false, 00:13:38.898 "compare": false, 00:13:38.898 "compare_and_write": false, 00:13:38.898 "abort": true, 00:13:38.898 "seek_hole": false, 00:13:38.898 "seek_data": false, 00:13:38.898 "copy": true, 00:13:38.898 "nvme_iov_md": false 00:13:38.898 }, 00:13:38.898 "memory_domains": [ 00:13:38.898 { 00:13:38.898 "dma_device_id": "system", 00:13:38.898 "dma_device_type": 1 00:13:38.898 }, 00:13:38.898 { 00:13:38.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.898 "dma_device_type": 2 00:13:38.898 } 00:13:38.898 ], 00:13:38.898 "driver_specific": {} 00:13:38.898 } 00:13:38.898 ] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.898 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.899 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.899 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.899 "name": "Existed_Raid", 00:13:38.899 "uuid": "69b37317-c506-4711-aac0-f9a33647f75f", 00:13:38.899 "strip_size_kb": 0, 00:13:38.899 "state": "configuring", 00:13:38.899 "raid_level": "raid1", 00:13:38.899 "superblock": true, 00:13:38.899 "num_base_bdevs": 3, 00:13:38.899 "num_base_bdevs_discovered": 1, 00:13:38.899 "num_base_bdevs_operational": 3, 00:13:38.899 "base_bdevs_list": [ 00:13:38.899 { 00:13:38.899 "name": "BaseBdev1", 00:13:38.899 "uuid": "1196f183-1dd4-4564-b5c0-a20b5f72e408", 00:13:38.899 "is_configured": true, 00:13:38.899 "data_offset": 2048, 00:13:38.899 "data_size": 63488 
00:13:38.899 }, 00:13:38.899 { 00:13:38.899 "name": "BaseBdev2", 00:13:38.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.899 "is_configured": false, 00:13:38.899 "data_offset": 0, 00:13:38.899 "data_size": 0 00:13:38.899 }, 00:13:38.899 { 00:13:38.899 "name": "BaseBdev3", 00:13:38.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.899 "is_configured": false, 00:13:38.899 "data_offset": 0, 00:13:38.899 "data_size": 0 00:13:38.899 } 00:13:38.899 ] 00:13:38.899 }' 00:13:38.899 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.899 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 [2024-11-27 14:13:09.796906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.464 [2024-11-27 14:13:09.797313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 [2024-11-27 14:13:09.804880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.464 [2024-11-27 14:13:09.807655] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.464 [2024-11-27 14:13:09.807740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.464 [2024-11-27 14:13:09.807761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.464 [2024-11-27 14:13:09.807797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.464 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.465 "name": "Existed_Raid", 00:13:39.465 "uuid": "7b5372e2-2123-4e0a-ae6a-ebf0f168c866", 00:13:39.465 "strip_size_kb": 0, 00:13:39.465 "state": "configuring", 00:13:39.465 "raid_level": "raid1", 00:13:39.465 "superblock": true, 00:13:39.465 "num_base_bdevs": 3, 00:13:39.465 "num_base_bdevs_discovered": 1, 00:13:39.465 "num_base_bdevs_operational": 3, 00:13:39.465 "base_bdevs_list": [ 00:13:39.465 { 00:13:39.465 "name": "BaseBdev1", 00:13:39.465 "uuid": "1196f183-1dd4-4564-b5c0-a20b5f72e408", 00:13:39.465 "is_configured": true, 00:13:39.465 "data_offset": 2048, 00:13:39.465 "data_size": 63488 00:13:39.465 }, 00:13:39.465 { 00:13:39.465 "name": "BaseBdev2", 00:13:39.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.465 "is_configured": false, 00:13:39.465 "data_offset": 0, 00:13:39.465 "data_size": 0 00:13:39.465 }, 00:13:39.465 { 00:13:39.465 "name": "BaseBdev3", 00:13:39.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.465 "is_configured": false, 00:13:39.465 "data_offset": 0, 00:13:39.465 "data_size": 0 00:13:39.465 } 00:13:39.465 ] 00:13:39.465 }' 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.465 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:40.031 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 [2024-11-27 14:13:10.367349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.032 BaseBdev2 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 [ 00:13:40.032 { 00:13:40.032 "name": "BaseBdev2", 00:13:40.032 "aliases": [ 00:13:40.032 "1fe9ec2b-b3de-427f-9ec2-8c0f21942216" 00:13:40.032 ], 00:13:40.032 "product_name": "Malloc disk", 00:13:40.032 "block_size": 512, 00:13:40.032 "num_blocks": 65536, 00:13:40.032 "uuid": "1fe9ec2b-b3de-427f-9ec2-8c0f21942216", 00:13:40.032 "assigned_rate_limits": { 00:13:40.032 "rw_ios_per_sec": 0, 00:13:40.032 "rw_mbytes_per_sec": 0, 00:13:40.032 "r_mbytes_per_sec": 0, 00:13:40.032 "w_mbytes_per_sec": 0 00:13:40.032 }, 00:13:40.032 "claimed": true, 00:13:40.032 "claim_type": "exclusive_write", 00:13:40.032 "zoned": false, 00:13:40.032 "supported_io_types": { 00:13:40.032 "read": true, 00:13:40.032 "write": true, 00:13:40.032 "unmap": true, 00:13:40.032 "flush": true, 00:13:40.032 "reset": true, 00:13:40.032 "nvme_admin": false, 00:13:40.032 "nvme_io": false, 00:13:40.032 "nvme_io_md": false, 00:13:40.032 "write_zeroes": true, 00:13:40.032 "zcopy": true, 00:13:40.032 "get_zone_info": false, 00:13:40.032 "zone_management": false, 00:13:40.032 "zone_append": false, 00:13:40.032 "compare": false, 00:13:40.032 "compare_and_write": false, 00:13:40.032 "abort": true, 00:13:40.032 "seek_hole": false, 00:13:40.032 "seek_data": false, 00:13:40.032 "copy": true, 00:13:40.032 "nvme_iov_md": false 00:13:40.032 }, 00:13:40.032 "memory_domains": [ 00:13:40.032 { 00:13:40.032 "dma_device_id": "system", 00:13:40.032 "dma_device_type": 1 00:13:40.032 }, 00:13:40.032 { 00:13:40.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.032 "dma_device_type": 2 00:13:40.032 } 00:13:40.032 ], 00:13:40.032 "driver_specific": {} 00:13:40.032 } 00:13:40.032 ] 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.032 
14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.032 "name": "Existed_Raid", 00:13:40.032 "uuid": "7b5372e2-2123-4e0a-ae6a-ebf0f168c866", 00:13:40.032 "strip_size_kb": 0, 00:13:40.032 "state": "configuring", 00:13:40.032 "raid_level": "raid1", 00:13:40.032 "superblock": true, 00:13:40.032 "num_base_bdevs": 3, 00:13:40.032 "num_base_bdevs_discovered": 2, 00:13:40.032 "num_base_bdevs_operational": 3, 00:13:40.032 "base_bdevs_list": [ 00:13:40.032 { 00:13:40.032 "name": "BaseBdev1", 00:13:40.032 "uuid": "1196f183-1dd4-4564-b5c0-a20b5f72e408", 00:13:40.032 "is_configured": true, 00:13:40.032 "data_offset": 2048, 00:13:40.032 "data_size": 63488 00:13:40.032 }, 00:13:40.032 { 00:13:40.032 "name": "BaseBdev2", 00:13:40.032 "uuid": "1fe9ec2b-b3de-427f-9ec2-8c0f21942216", 00:13:40.032 "is_configured": true, 00:13:40.032 "data_offset": 2048, 00:13:40.032 "data_size": 63488 00:13:40.032 }, 00:13:40.032 { 00:13:40.032 "name": "BaseBdev3", 00:13:40.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.032 "is_configured": false, 00:13:40.032 "data_offset": 0, 00:13:40.032 "data_size": 0 00:13:40.032 } 00:13:40.032 ] 00:13:40.032 }' 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.032 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.599 [2024-11-27 14:13:10.967176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.599 [2024-11-27 14:13:10.967581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:13:40.599 [2024-11-27 14:13:10.967616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:40.599 BaseBdev3 00:13:40.599 [2024-11-27 14:13:10.968029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:40.599 [2024-11-27 14:13:10.968271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:40.599 [2024-11-27 14:13:10.968401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.599 [2024-11-27 14:13:10.968625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.599 14:13:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.599 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.599 [ 00:13:40.599 { 00:13:40.599 "name": "BaseBdev3", 00:13:40.599 "aliases": [ 00:13:40.599 "7f910776-8242-4980-bfe0-bf2c732e954f" 00:13:40.599 ], 00:13:40.599 "product_name": "Malloc disk", 00:13:40.599 "block_size": 512, 00:13:40.599 "num_blocks": 65536, 00:13:40.599 "uuid": "7f910776-8242-4980-bfe0-bf2c732e954f", 00:13:40.599 "assigned_rate_limits": { 00:13:40.599 "rw_ios_per_sec": 0, 00:13:40.599 "rw_mbytes_per_sec": 0, 00:13:40.599 "r_mbytes_per_sec": 0, 00:13:40.599 "w_mbytes_per_sec": 0 00:13:40.599 }, 00:13:40.599 "claimed": true, 00:13:40.599 "claim_type": "exclusive_write", 00:13:40.599 "zoned": false, 00:13:40.599 "supported_io_types": { 00:13:40.599 "read": true, 00:13:40.599 "write": true, 00:13:40.599 "unmap": true, 00:13:40.599 "flush": true, 00:13:40.599 "reset": true, 00:13:40.599 "nvme_admin": false, 00:13:40.599 "nvme_io": false, 00:13:40.599 "nvme_io_md": false, 00:13:40.599 "write_zeroes": true, 00:13:40.599 "zcopy": true, 00:13:40.599 "get_zone_info": false, 00:13:40.599 "zone_management": false, 00:13:40.599 "zone_append": false, 00:13:40.599 "compare": false, 00:13:40.599 "compare_and_write": false, 00:13:40.599 "abort": true, 00:13:40.599 "seek_hole": false, 00:13:40.599 "seek_data": false, 00:13:40.599 "copy": true, 00:13:40.599 "nvme_iov_md": false 00:13:40.599 }, 00:13:40.599 "memory_domains": [ 00:13:40.599 { 00:13:40.599 "dma_device_id": "system", 00:13:40.599 "dma_device_type": 1 00:13:40.599 }, 00:13:40.599 { 00:13:40.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.599 "dma_device_type": 2 00:13:40.599 } 00:13:40.599 ], 00:13:40.599 "driver_specific": {} 00:13:40.599 } 00:13:40.599 ] 
00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.599 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.600 
14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.600 "name": "Existed_Raid", 00:13:40.600 "uuid": "7b5372e2-2123-4e0a-ae6a-ebf0f168c866", 00:13:40.600 "strip_size_kb": 0, 00:13:40.600 "state": "online", 00:13:40.600 "raid_level": "raid1", 00:13:40.600 "superblock": true, 00:13:40.600 "num_base_bdevs": 3, 00:13:40.600 "num_base_bdevs_discovered": 3, 00:13:40.600 "num_base_bdevs_operational": 3, 00:13:40.600 "base_bdevs_list": [ 00:13:40.600 { 00:13:40.600 "name": "BaseBdev1", 00:13:40.600 "uuid": "1196f183-1dd4-4564-b5c0-a20b5f72e408", 00:13:40.600 "is_configured": true, 00:13:40.600 "data_offset": 2048, 00:13:40.600 "data_size": 63488 00:13:40.600 }, 00:13:40.600 { 00:13:40.600 "name": "BaseBdev2", 00:13:40.600 "uuid": "1fe9ec2b-b3de-427f-9ec2-8c0f21942216", 00:13:40.600 "is_configured": true, 00:13:40.600 "data_offset": 2048, 00:13:40.600 "data_size": 63488 00:13:40.600 }, 00:13:40.600 { 00:13:40.600 "name": "BaseBdev3", 00:13:40.600 "uuid": "7f910776-8242-4980-bfe0-bf2c732e954f", 00:13:40.600 "is_configured": true, 00:13:40.600 "data_offset": 2048, 00:13:40.600 "data_size": 63488 00:13:40.600 } 00:13:40.600 ] 00:13:40.600 }' 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.600 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.166 [2024-11-27 14:13:11.527869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.166 "name": "Existed_Raid", 00:13:41.166 "aliases": [ 00:13:41.166 "7b5372e2-2123-4e0a-ae6a-ebf0f168c866" 00:13:41.166 ], 00:13:41.166 "product_name": "Raid Volume", 00:13:41.166 "block_size": 512, 00:13:41.166 "num_blocks": 63488, 00:13:41.166 "uuid": "7b5372e2-2123-4e0a-ae6a-ebf0f168c866", 00:13:41.166 "assigned_rate_limits": { 00:13:41.166 "rw_ios_per_sec": 0, 00:13:41.166 "rw_mbytes_per_sec": 0, 00:13:41.166 "r_mbytes_per_sec": 0, 00:13:41.166 "w_mbytes_per_sec": 0 00:13:41.166 }, 00:13:41.166 "claimed": false, 00:13:41.166 "zoned": false, 00:13:41.166 "supported_io_types": { 00:13:41.166 "read": true, 00:13:41.166 "write": true, 00:13:41.166 "unmap": false, 00:13:41.166 "flush": false, 00:13:41.166 "reset": true, 00:13:41.166 "nvme_admin": false, 00:13:41.166 "nvme_io": false, 00:13:41.166 "nvme_io_md": false, 00:13:41.166 "write_zeroes": true, 
00:13:41.166 "zcopy": false, 00:13:41.166 "get_zone_info": false, 00:13:41.166 "zone_management": false, 00:13:41.166 "zone_append": false, 00:13:41.166 "compare": false, 00:13:41.166 "compare_and_write": false, 00:13:41.166 "abort": false, 00:13:41.166 "seek_hole": false, 00:13:41.166 "seek_data": false, 00:13:41.166 "copy": false, 00:13:41.166 "nvme_iov_md": false 00:13:41.166 }, 00:13:41.166 "memory_domains": [ 00:13:41.166 { 00:13:41.166 "dma_device_id": "system", 00:13:41.166 "dma_device_type": 1 00:13:41.166 }, 00:13:41.166 { 00:13:41.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.166 "dma_device_type": 2 00:13:41.166 }, 00:13:41.166 { 00:13:41.166 "dma_device_id": "system", 00:13:41.166 "dma_device_type": 1 00:13:41.166 }, 00:13:41.166 { 00:13:41.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.166 "dma_device_type": 2 00:13:41.166 }, 00:13:41.166 { 00:13:41.166 "dma_device_id": "system", 00:13:41.166 "dma_device_type": 1 00:13:41.166 }, 00:13:41.166 { 00:13:41.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.166 "dma_device_type": 2 00:13:41.166 } 00:13:41.166 ], 00:13:41.166 "driver_specific": { 00:13:41.166 "raid": { 00:13:41.166 "uuid": "7b5372e2-2123-4e0a-ae6a-ebf0f168c866", 00:13:41.166 "strip_size_kb": 0, 00:13:41.166 "state": "online", 00:13:41.166 "raid_level": "raid1", 00:13:41.166 "superblock": true, 00:13:41.166 "num_base_bdevs": 3, 00:13:41.166 "num_base_bdevs_discovered": 3, 00:13:41.166 "num_base_bdevs_operational": 3, 00:13:41.166 "base_bdevs_list": [ 00:13:41.166 { 00:13:41.166 "name": "BaseBdev1", 00:13:41.166 "uuid": "1196f183-1dd4-4564-b5c0-a20b5f72e408", 00:13:41.166 "is_configured": true, 00:13:41.166 "data_offset": 2048, 00:13:41.166 "data_size": 63488 00:13:41.166 }, 00:13:41.166 { 00:13:41.166 "name": "BaseBdev2", 00:13:41.166 "uuid": "1fe9ec2b-b3de-427f-9ec2-8c0f21942216", 00:13:41.166 "is_configured": true, 00:13:41.166 "data_offset": 2048, 00:13:41.166 "data_size": 63488 00:13:41.166 }, 00:13:41.166 { 
00:13:41.166 "name": "BaseBdev3", 00:13:41.166 "uuid": "7f910776-8242-4980-bfe0-bf2c732e954f", 00:13:41.166 "is_configured": true, 00:13:41.166 "data_offset": 2048, 00:13:41.166 "data_size": 63488 00:13:41.166 } 00:13:41.166 ] 00:13:41.166 } 00:13:41.166 } 00:13:41.166 }' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:41.166 BaseBdev2 00:13:41.166 BaseBdev3' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.166 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.425 14:13:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.425 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.426 [2024-11-27 14:13:11.831585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.426 
14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.426 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.684 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.684 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.684 "name": "Existed_Raid", 00:13:41.684 "uuid": "7b5372e2-2123-4e0a-ae6a-ebf0f168c866", 00:13:41.684 "strip_size_kb": 0, 00:13:41.684 "state": "online", 00:13:41.684 "raid_level": "raid1", 00:13:41.684 "superblock": true, 00:13:41.684 "num_base_bdevs": 3, 00:13:41.684 "num_base_bdevs_discovered": 2, 00:13:41.684 "num_base_bdevs_operational": 2, 00:13:41.684 "base_bdevs_list": [ 00:13:41.684 { 00:13:41.684 "name": null, 00:13:41.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.684 "is_configured": false, 00:13:41.684 "data_offset": 0, 00:13:41.684 "data_size": 63488 00:13:41.684 }, 00:13:41.684 { 00:13:41.684 "name": "BaseBdev2", 00:13:41.684 "uuid": "1fe9ec2b-b3de-427f-9ec2-8c0f21942216", 00:13:41.684 "is_configured": true, 00:13:41.684 "data_offset": 2048, 00:13:41.684 "data_size": 63488 00:13:41.684 }, 00:13:41.684 { 00:13:41.684 "name": "BaseBdev3", 00:13:41.684 "uuid": "7f910776-8242-4980-bfe0-bf2c732e954f", 00:13:41.684 "is_configured": true, 00:13:41.684 "data_offset": 2048, 00:13:41.684 "data_size": 63488 00:13:41.684 } 00:13:41.684 ] 00:13:41.684 }' 00:13:41.684 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.684 
14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.942 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:41.943 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:41.943 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.943 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.943 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.943 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:41.943 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.201 [2024-11-27 14:13:12.485760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.201 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.201 [2024-11-27 14:13:12.652395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.201 [2024-11-27 14:13:12.652601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.460 [2024-11-27 14:13:12.743016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.460 [2024-11-27 14:13:12.743135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.460 [2024-11-27 14:13:12.743165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.460 BaseBdev2 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.460 [ 00:13:42.460 { 00:13:42.460 "name": "BaseBdev2", 00:13:42.460 "aliases": [ 00:13:42.460 "ecf540bb-da20-4a4d-9030-18c9b5ae0554" 00:13:42.460 ], 00:13:42.460 "product_name": "Malloc disk", 00:13:42.460 "block_size": 512, 00:13:42.460 "num_blocks": 65536, 00:13:42.460 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:42.460 "assigned_rate_limits": { 00:13:42.460 "rw_ios_per_sec": 0, 00:13:42.460 "rw_mbytes_per_sec": 0, 00:13:42.460 "r_mbytes_per_sec": 0, 00:13:42.460 "w_mbytes_per_sec": 0 00:13:42.460 }, 00:13:42.460 "claimed": false, 00:13:42.460 "zoned": false, 00:13:42.460 "supported_io_types": { 00:13:42.460 "read": true, 00:13:42.460 "write": true, 00:13:42.460 "unmap": true, 00:13:42.460 "flush": true, 00:13:42.460 "reset": true, 00:13:42.460 "nvme_admin": false, 00:13:42.460 "nvme_io": false, 00:13:42.460 
"nvme_io_md": false, 00:13:42.460 "write_zeroes": true, 00:13:42.460 "zcopy": true, 00:13:42.460 "get_zone_info": false, 00:13:42.460 "zone_management": false, 00:13:42.460 "zone_append": false, 00:13:42.460 "compare": false, 00:13:42.460 "compare_and_write": false, 00:13:42.460 "abort": true, 00:13:42.460 "seek_hole": false, 00:13:42.460 "seek_data": false, 00:13:42.460 "copy": true, 00:13:42.460 "nvme_iov_md": false 00:13:42.460 }, 00:13:42.460 "memory_domains": [ 00:13:42.460 { 00:13:42.460 "dma_device_id": "system", 00:13:42.460 "dma_device_type": 1 00:13:42.460 }, 00:13:42.460 { 00:13:42.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.460 "dma_device_type": 2 00:13:42.460 } 00:13:42.460 ], 00:13:42.460 "driver_specific": {} 00:13:42.460 } 00:13:42.460 ] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.460 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.460 BaseBdev3 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.461 [ 00:13:42.461 { 00:13:42.461 "name": "BaseBdev3", 00:13:42.461 "aliases": [ 00:13:42.461 "f3863e2d-fee3-4b9c-9d56-638f6c6936c6" 00:13:42.461 ], 00:13:42.461 "product_name": "Malloc disk", 00:13:42.461 "block_size": 512, 00:13:42.461 "num_blocks": 65536, 00:13:42.461 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:42.461 "assigned_rate_limits": { 00:13:42.461 "rw_ios_per_sec": 0, 00:13:42.461 "rw_mbytes_per_sec": 0, 00:13:42.461 "r_mbytes_per_sec": 0, 00:13:42.461 "w_mbytes_per_sec": 0 00:13:42.461 }, 00:13:42.461 "claimed": false, 00:13:42.461 "zoned": false, 00:13:42.461 "supported_io_types": { 00:13:42.461 "read": true, 00:13:42.461 "write": true, 00:13:42.461 "unmap": true, 00:13:42.461 "flush": true, 00:13:42.461 "reset": true, 00:13:42.461 "nvme_admin": false, 
00:13:42.461 "nvme_io": false, 00:13:42.461 "nvme_io_md": false, 00:13:42.461 "write_zeroes": true, 00:13:42.461 "zcopy": true, 00:13:42.461 "get_zone_info": false, 00:13:42.461 "zone_management": false, 00:13:42.461 "zone_append": false, 00:13:42.461 "compare": false, 00:13:42.461 "compare_and_write": false, 00:13:42.461 "abort": true, 00:13:42.461 "seek_hole": false, 00:13:42.461 "seek_data": false, 00:13:42.461 "copy": true, 00:13:42.461 "nvme_iov_md": false 00:13:42.461 }, 00:13:42.461 "memory_domains": [ 00:13:42.461 { 00:13:42.461 "dma_device_id": "system", 00:13:42.461 "dma_device_type": 1 00:13:42.461 }, 00:13:42.461 { 00:13:42.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.461 "dma_device_type": 2 00:13:42.461 } 00:13:42.461 ], 00:13:42.461 "driver_specific": {} 00:13:42.461 } 00:13:42.461 ] 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.461 [2024-11-27 14:13:12.957482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.461 [2024-11-27 14:13:12.957598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.461 [2024-11-27 14:13:12.957642] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.461 [2024-11-27 14:13:12.960473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.461 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.719 
14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.719 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.719 "name": "Existed_Raid", 00:13:42.719 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:42.719 "strip_size_kb": 0, 00:13:42.719 "state": "configuring", 00:13:42.719 "raid_level": "raid1", 00:13:42.719 "superblock": true, 00:13:42.719 "num_base_bdevs": 3, 00:13:42.719 "num_base_bdevs_discovered": 2, 00:13:42.719 "num_base_bdevs_operational": 3, 00:13:42.719 "base_bdevs_list": [ 00:13:42.719 { 00:13:42.719 "name": "BaseBdev1", 00:13:42.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.719 "is_configured": false, 00:13:42.719 "data_offset": 0, 00:13:42.719 "data_size": 0 00:13:42.719 }, 00:13:42.719 { 00:13:42.719 "name": "BaseBdev2", 00:13:42.719 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:42.719 "is_configured": true, 00:13:42.719 "data_offset": 2048, 00:13:42.719 "data_size": 63488 00:13:42.719 }, 00:13:42.719 { 00:13:42.719 "name": "BaseBdev3", 00:13:42.719 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:42.719 "is_configured": true, 00:13:42.719 "data_offset": 2048, 00:13:42.719 "data_size": 63488 00:13:42.719 } 00:13:42.719 ] 00:13:42.719 }' 00:13:42.719 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.719 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 [2024-11-27 14:13:13.469629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.034 14:13:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.034 "name": 
"Existed_Raid", 00:13:43.034 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:43.034 "strip_size_kb": 0, 00:13:43.034 "state": "configuring", 00:13:43.034 "raid_level": "raid1", 00:13:43.034 "superblock": true, 00:13:43.034 "num_base_bdevs": 3, 00:13:43.034 "num_base_bdevs_discovered": 1, 00:13:43.034 "num_base_bdevs_operational": 3, 00:13:43.034 "base_bdevs_list": [ 00:13:43.034 { 00:13:43.034 "name": "BaseBdev1", 00:13:43.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.034 "is_configured": false, 00:13:43.034 "data_offset": 0, 00:13:43.034 "data_size": 0 00:13:43.034 }, 00:13:43.034 { 00:13:43.034 "name": null, 00:13:43.034 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:43.034 "is_configured": false, 00:13:43.034 "data_offset": 0, 00:13:43.034 "data_size": 63488 00:13:43.034 }, 00:13:43.034 { 00:13:43.034 "name": "BaseBdev3", 00:13:43.034 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:43.034 "is_configured": true, 00:13:43.034 "data_offset": 2048, 00:13:43.034 "data_size": 63488 00:13:43.034 } 00:13:43.034 ] 00:13:43.034 }' 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.034 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.601 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.601 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:43.601 
14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 [2024-11-27 14:13:14.095376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.601 BaseBdev1 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:43.601 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.860 [ 00:13:43.860 { 00:13:43.860 "name": "BaseBdev1", 00:13:43.860 "aliases": [ 00:13:43.860 "0954cd29-670b-4b83-a09d-bd5b8d39a460" 00:13:43.860 ], 00:13:43.860 "product_name": "Malloc disk", 00:13:43.860 "block_size": 512, 00:13:43.860 "num_blocks": 65536, 00:13:43.860 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:43.860 "assigned_rate_limits": { 00:13:43.860 "rw_ios_per_sec": 0, 00:13:43.860 "rw_mbytes_per_sec": 0, 00:13:43.860 "r_mbytes_per_sec": 0, 00:13:43.860 "w_mbytes_per_sec": 0 00:13:43.860 }, 00:13:43.860 "claimed": true, 00:13:43.860 "claim_type": "exclusive_write", 00:13:43.860 "zoned": false, 00:13:43.860 "supported_io_types": { 00:13:43.860 "read": true, 00:13:43.860 "write": true, 00:13:43.860 "unmap": true, 00:13:43.860 "flush": true, 00:13:43.860 "reset": true, 00:13:43.860 "nvme_admin": false, 00:13:43.860 "nvme_io": false, 00:13:43.860 "nvme_io_md": false, 00:13:43.860 "write_zeroes": true, 00:13:43.860 "zcopy": true, 00:13:43.860 "get_zone_info": false, 00:13:43.860 "zone_management": false, 00:13:43.860 "zone_append": false, 00:13:43.860 "compare": false, 00:13:43.860 "compare_and_write": false, 00:13:43.860 "abort": true, 00:13:43.860 "seek_hole": false, 00:13:43.860 "seek_data": false, 00:13:43.860 "copy": true, 00:13:43.860 "nvme_iov_md": false 00:13:43.860 }, 00:13:43.860 "memory_domains": [ 00:13:43.860 { 00:13:43.860 "dma_device_id": "system", 00:13:43.860 "dma_device_type": 1 00:13:43.860 }, 00:13:43.860 { 00:13:43.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.860 "dma_device_type": 2 00:13:43.860 } 00:13:43.860 ], 00:13:43.860 "driver_specific": {} 00:13:43.860 } 00:13:43.860 ] 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:43.860 
14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.860 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.860 "name": "Existed_Raid", 00:13:43.860 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:43.860 "strip_size_kb": 0, 
00:13:43.860 "state": "configuring", 00:13:43.860 "raid_level": "raid1", 00:13:43.860 "superblock": true, 00:13:43.860 "num_base_bdevs": 3, 00:13:43.860 "num_base_bdevs_discovered": 2, 00:13:43.860 "num_base_bdevs_operational": 3, 00:13:43.860 "base_bdevs_list": [ 00:13:43.860 { 00:13:43.860 "name": "BaseBdev1", 00:13:43.860 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:43.860 "is_configured": true, 00:13:43.860 "data_offset": 2048, 00:13:43.860 "data_size": 63488 00:13:43.860 }, 00:13:43.860 { 00:13:43.860 "name": null, 00:13:43.860 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:43.860 "is_configured": false, 00:13:43.860 "data_offset": 0, 00:13:43.860 "data_size": 63488 00:13:43.860 }, 00:13:43.860 { 00:13:43.860 "name": "BaseBdev3", 00:13:43.860 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:43.860 "is_configured": true, 00:13:43.860 "data_offset": 2048, 00:13:43.860 "data_size": 63488 00:13:43.860 } 00:13:43.860 ] 00:13:43.860 }' 00:13:43.861 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.861 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.121 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.121 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:44.121 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.121 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.380 [2024-11-27 14:13:14.683658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.380 "name": "Existed_Raid", 00:13:44.380 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:44.380 "strip_size_kb": 0, 00:13:44.380 "state": "configuring", 00:13:44.380 "raid_level": "raid1", 00:13:44.380 "superblock": true, 00:13:44.380 "num_base_bdevs": 3, 00:13:44.380 "num_base_bdevs_discovered": 1, 00:13:44.380 "num_base_bdevs_operational": 3, 00:13:44.380 "base_bdevs_list": [ 00:13:44.380 { 00:13:44.380 "name": "BaseBdev1", 00:13:44.380 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:44.380 "is_configured": true, 00:13:44.380 "data_offset": 2048, 00:13:44.380 "data_size": 63488 00:13:44.380 }, 00:13:44.380 { 00:13:44.380 "name": null, 00:13:44.380 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:44.380 "is_configured": false, 00:13:44.380 "data_offset": 0, 00:13:44.380 "data_size": 63488 00:13:44.380 }, 00:13:44.380 { 00:13:44.380 "name": null, 00:13:44.380 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:44.380 "is_configured": false, 00:13:44.380 "data_offset": 0, 00:13:44.380 "data_size": 63488 00:13:44.380 } 00:13:44.380 ] 00:13:44.380 }' 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.380 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 [2024-11-27 14:13:15.235821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.948 "name": "Existed_Raid", 00:13:44.948 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:44.948 "strip_size_kb": 0, 00:13:44.948 "state": "configuring", 00:13:44.948 "raid_level": "raid1", 00:13:44.948 "superblock": true, 00:13:44.948 "num_base_bdevs": 3, 00:13:44.948 "num_base_bdevs_discovered": 2, 00:13:44.948 "num_base_bdevs_operational": 3, 00:13:44.948 "base_bdevs_list": [ 00:13:44.948 { 00:13:44.948 "name": "BaseBdev1", 00:13:44.948 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:44.948 "is_configured": true, 00:13:44.948 "data_offset": 2048, 00:13:44.948 "data_size": 63488 00:13:44.948 }, 00:13:44.948 { 00:13:44.948 "name": null, 00:13:44.948 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:44.948 "is_configured": false, 00:13:44.948 "data_offset": 0, 00:13:44.948 "data_size": 63488 00:13:44.948 }, 00:13:44.948 { 00:13:44.948 "name": "BaseBdev3", 00:13:44.948 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:44.948 "is_configured": true, 00:13:44.948 "data_offset": 2048, 00:13:44.948 "data_size": 63488 00:13:44.948 } 00:13:44.948 ] 00:13:44.948 }' 00:13:44.948 14:13:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.948 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.515 [2024-11-27 14:13:15.804010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.515 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.516 14:13:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.516 "name": "Existed_Raid", 00:13:45.516 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:45.516 "strip_size_kb": 0, 00:13:45.516 "state": "configuring", 00:13:45.516 "raid_level": "raid1", 00:13:45.516 "superblock": true, 00:13:45.516 "num_base_bdevs": 3, 00:13:45.516 "num_base_bdevs_discovered": 1, 00:13:45.516 "num_base_bdevs_operational": 3, 00:13:45.516 "base_bdevs_list": [ 00:13:45.516 { 00:13:45.516 "name": null, 00:13:45.516 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:45.516 "is_configured": false, 00:13:45.516 "data_offset": 0, 00:13:45.516 "data_size": 63488 00:13:45.516 }, 00:13:45.516 { 00:13:45.516 
"name": null, 00:13:45.516 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:45.516 "is_configured": false, 00:13:45.516 "data_offset": 0, 00:13:45.516 "data_size": 63488 00:13:45.516 }, 00:13:45.516 { 00:13:45.516 "name": "BaseBdev3", 00:13:45.516 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:45.516 "is_configured": true, 00:13:45.516 "data_offset": 2048, 00:13:45.516 "data_size": 63488 00:13:45.516 } 00:13:45.516 ] 00:13:45.516 }' 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.516 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.083 [2024-11-27 14:13:16.453565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.083 14:13:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.083 "name": "Existed_Raid", 00:13:46.083 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:46.083 "strip_size_kb": 0, 
00:13:46.083 "state": "configuring", 00:13:46.083 "raid_level": "raid1", 00:13:46.083 "superblock": true, 00:13:46.083 "num_base_bdevs": 3, 00:13:46.083 "num_base_bdevs_discovered": 2, 00:13:46.083 "num_base_bdevs_operational": 3, 00:13:46.083 "base_bdevs_list": [ 00:13:46.083 { 00:13:46.083 "name": null, 00:13:46.083 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:46.083 "is_configured": false, 00:13:46.083 "data_offset": 0, 00:13:46.083 "data_size": 63488 00:13:46.083 }, 00:13:46.083 { 00:13:46.083 "name": "BaseBdev2", 00:13:46.083 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:46.083 "is_configured": true, 00:13:46.083 "data_offset": 2048, 00:13:46.083 "data_size": 63488 00:13:46.083 }, 00:13:46.083 { 00:13:46.083 "name": "BaseBdev3", 00:13:46.083 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:46.083 "is_configured": true, 00:13:46.083 "data_offset": 2048, 00:13:46.083 "data_size": 63488 00:13:46.083 } 00:13:46.083 ] 00:13:46.083 }' 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.083 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.649 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.649 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.649 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.649 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.649 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.649 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0954cd29-670b-4b83-a09d-bd5b8d39a460 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.649 [2024-11-27 14:13:17.091597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:46.649 [2024-11-27 14:13:17.092033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:46.649 [2024-11-27 14:13:17.092056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.649 NewBaseBdev 00:13:46.649 [2024-11-27 14:13:17.092409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:46.649 [2024-11-27 14:13:17.092634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:46.649 [2024-11-27 14:13:17.092660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:46.649 [2024-11-27 14:13:17.092871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev 
NewBaseBdev 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.649 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.649 [ 00:13:46.649 { 00:13:46.649 "name": "NewBaseBdev", 00:13:46.649 "aliases": [ 00:13:46.650 "0954cd29-670b-4b83-a09d-bd5b8d39a460" 00:13:46.650 ], 00:13:46.650 "product_name": "Malloc disk", 00:13:46.650 "block_size": 512, 00:13:46.650 "num_blocks": 65536, 00:13:46.650 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:46.650 "assigned_rate_limits": { 00:13:46.650 "rw_ios_per_sec": 0, 00:13:46.650 "rw_mbytes_per_sec": 0, 00:13:46.650 "r_mbytes_per_sec": 0, 00:13:46.650 "w_mbytes_per_sec": 0 00:13:46.650 }, 00:13:46.650 "claimed": true, 00:13:46.650 "claim_type": 
"exclusive_write", 00:13:46.650 "zoned": false, 00:13:46.650 "supported_io_types": { 00:13:46.650 "read": true, 00:13:46.650 "write": true, 00:13:46.650 "unmap": true, 00:13:46.650 "flush": true, 00:13:46.650 "reset": true, 00:13:46.650 "nvme_admin": false, 00:13:46.650 "nvme_io": false, 00:13:46.650 "nvme_io_md": false, 00:13:46.650 "write_zeroes": true, 00:13:46.650 "zcopy": true, 00:13:46.650 "get_zone_info": false, 00:13:46.650 "zone_management": false, 00:13:46.650 "zone_append": false, 00:13:46.650 "compare": false, 00:13:46.650 "compare_and_write": false, 00:13:46.650 "abort": true, 00:13:46.650 "seek_hole": false, 00:13:46.650 "seek_data": false, 00:13:46.650 "copy": true, 00:13:46.650 "nvme_iov_md": false 00:13:46.650 }, 00:13:46.650 "memory_domains": [ 00:13:46.650 { 00:13:46.650 "dma_device_id": "system", 00:13:46.650 "dma_device_type": 1 00:13:46.650 }, 00:13:46.650 { 00:13:46.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.650 "dma_device_type": 2 00:13:46.650 } 00:13:46.650 ], 00:13:46.650 "driver_specific": {} 00:13:46.650 } 00:13:46.650 ] 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.650 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.908 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.908 "name": "Existed_Raid", 00:13:46.908 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:46.908 "strip_size_kb": 0, 00:13:46.908 "state": "online", 00:13:46.908 "raid_level": "raid1", 00:13:46.908 "superblock": true, 00:13:46.908 "num_base_bdevs": 3, 00:13:46.908 "num_base_bdevs_discovered": 3, 00:13:46.908 "num_base_bdevs_operational": 3, 00:13:46.908 "base_bdevs_list": [ 00:13:46.908 { 00:13:46.908 "name": "NewBaseBdev", 00:13:46.908 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:46.908 "is_configured": true, 00:13:46.908 "data_offset": 2048, 00:13:46.908 "data_size": 63488 00:13:46.908 }, 00:13:46.908 { 00:13:46.908 "name": "BaseBdev2", 00:13:46.908 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:46.908 "is_configured": true, 00:13:46.908 "data_offset": 2048, 00:13:46.908 "data_size": 63488 
00:13:46.908 }, 00:13:46.908 { 00:13:46.908 "name": "BaseBdev3", 00:13:46.908 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:46.908 "is_configured": true, 00:13:46.908 "data_offset": 2048, 00:13:46.908 "data_size": 63488 00:13:46.908 } 00:13:46.908 ] 00:13:46.908 }' 00:13:46.908 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.908 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.165 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.165 [2024-11-27 14:13:17.668279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:47.423 "name": 
"Existed_Raid", 00:13:47.423 "aliases": [ 00:13:47.423 "6db303f7-98bc-4bab-9001-693d6503de75" 00:13:47.423 ], 00:13:47.423 "product_name": "Raid Volume", 00:13:47.423 "block_size": 512, 00:13:47.423 "num_blocks": 63488, 00:13:47.423 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:47.423 "assigned_rate_limits": { 00:13:47.423 "rw_ios_per_sec": 0, 00:13:47.423 "rw_mbytes_per_sec": 0, 00:13:47.423 "r_mbytes_per_sec": 0, 00:13:47.423 "w_mbytes_per_sec": 0 00:13:47.423 }, 00:13:47.423 "claimed": false, 00:13:47.423 "zoned": false, 00:13:47.423 "supported_io_types": { 00:13:47.423 "read": true, 00:13:47.423 "write": true, 00:13:47.423 "unmap": false, 00:13:47.423 "flush": false, 00:13:47.423 "reset": true, 00:13:47.423 "nvme_admin": false, 00:13:47.423 "nvme_io": false, 00:13:47.423 "nvme_io_md": false, 00:13:47.423 "write_zeroes": true, 00:13:47.423 "zcopy": false, 00:13:47.423 "get_zone_info": false, 00:13:47.423 "zone_management": false, 00:13:47.423 "zone_append": false, 00:13:47.423 "compare": false, 00:13:47.423 "compare_and_write": false, 00:13:47.423 "abort": false, 00:13:47.423 "seek_hole": false, 00:13:47.423 "seek_data": false, 00:13:47.423 "copy": false, 00:13:47.423 "nvme_iov_md": false 00:13:47.423 }, 00:13:47.423 "memory_domains": [ 00:13:47.423 { 00:13:47.423 "dma_device_id": "system", 00:13:47.423 "dma_device_type": 1 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.423 "dma_device_type": 2 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "dma_device_id": "system", 00:13:47.423 "dma_device_type": 1 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.423 "dma_device_type": 2 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "dma_device_id": "system", 00:13:47.423 "dma_device_type": 1 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.423 "dma_device_type": 2 00:13:47.423 } 00:13:47.423 ], 00:13:47.423 "driver_specific": { 
00:13:47.423 "raid": { 00:13:47.423 "uuid": "6db303f7-98bc-4bab-9001-693d6503de75", 00:13:47.423 "strip_size_kb": 0, 00:13:47.423 "state": "online", 00:13:47.423 "raid_level": "raid1", 00:13:47.423 "superblock": true, 00:13:47.423 "num_base_bdevs": 3, 00:13:47.423 "num_base_bdevs_discovered": 3, 00:13:47.423 "num_base_bdevs_operational": 3, 00:13:47.423 "base_bdevs_list": [ 00:13:47.423 { 00:13:47.423 "name": "NewBaseBdev", 00:13:47.423 "uuid": "0954cd29-670b-4b83-a09d-bd5b8d39a460", 00:13:47.423 "is_configured": true, 00:13:47.423 "data_offset": 2048, 00:13:47.423 "data_size": 63488 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "name": "BaseBdev2", 00:13:47.423 "uuid": "ecf540bb-da20-4a4d-9030-18c9b5ae0554", 00:13:47.423 "is_configured": true, 00:13:47.423 "data_offset": 2048, 00:13:47.423 "data_size": 63488 00:13:47.423 }, 00:13:47.423 { 00:13:47.423 "name": "BaseBdev3", 00:13:47.423 "uuid": "f3863e2d-fee3-4b9c-9d56-638f6c6936c6", 00:13:47.423 "is_configured": true, 00:13:47.423 "data_offset": 2048, 00:13:47.423 "data_size": 63488 00:13:47.423 } 00:13:47.423 ] 00:13:47.423 } 00:13:47.423 } 00:13:47.423 }' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:47.423 BaseBdev2 00:13:47.423 BaseBdev3' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.423 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.681 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.682 [2024-11-27 14:13:17.987975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.682 [2024-11-27 14:13:17.988340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.682 [2024-11-27 14:13:17.988494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.682 [2024-11-27 14:13:17.988954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.682 [2024-11-27 14:13:17.988978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68254 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68254 ']' 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68254 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:47.682 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.682 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68254 00:13:47.682 killing process with pid 68254 00:13:47.682 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.682 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.682 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68254' 00:13:47.682 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68254 00:13:47.682 [2024-11-27 14:13:18.025616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.682 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68254 00:13:47.939 [2024-11-27 14:13:18.315350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.313 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:49.313 00:13:49.313 real 0m11.972s 00:13:49.313 user 0m19.706s 00:13:49.313 sys 0m1.636s 00:13:49.313 ************************************ 00:13:49.313 END TEST raid_state_function_test_sb 00:13:49.313 ************************************ 00:13:49.313 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.313 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.313 14:13:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:13:49.313 14:13:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:49.313 14:13:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.313 14:13:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.313 ************************************ 00:13:49.313 START TEST raid_superblock_test 00:13:49.313 ************************************ 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68885 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68885 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68885 ']' 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.313 14:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.313 [2024-11-27 14:13:19.656659] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:13:49.313 [2024-11-27 14:13:19.657073] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68885 ] 00:13:49.571 [2024-11-27 14:13:19.841648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.571 [2024-11-27 14:13:19.990201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.829 [2024-11-27 14:13:20.215794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.829 [2024-11-27 14:13:20.216254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:50.395 
14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 malloc1 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 [2024-11-27 14:13:20.666144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:50.395 [2024-11-27 14:13:20.666611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.395 [2024-11-27 14:13:20.666664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:50.395 [2024-11-27 14:13:20.666686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.395 [2024-11-27 14:13:20.670135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.395 [2024-11-27 14:13:20.670190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:50.395 pt1 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 malloc2 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 [2024-11-27 14:13:20.730045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:50.395 [2024-11-27 14:13:20.730125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.395 [2024-11-27 14:13:20.730170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:50.395 [2024-11-27 14:13:20.730189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.395 [2024-11-27 14:13:20.733524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.395 [2024-11-27 14:13:20.733726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:50.395 
pt2 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 malloc3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 [2024-11-27 14:13:20.811078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:50.395 [2024-11-27 14:13:20.811305] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.395 [2024-11-27 14:13:20.811361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:50.395 [2024-11-27 14:13:20.811381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.395 [2024-11-27 14:13:20.814720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.395 pt3 00:13:50.395 [2024-11-27 14:13:20.814932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 [2024-11-27 14:13:20.823214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:50.395 [2024-11-27 14:13:20.826172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:50.395 [2024-11-27 14:13:20.826443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:50.395 [2024-11-27 14:13:20.826718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:50.395 [2024-11-27 14:13:20.826753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.395 [2024-11-27 14:13:20.827157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:50.395 
[2024-11-27 14:13:20.827423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:50.395 [2024-11-27 14:13:20.827447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:50.395 [2024-11-27 14:13:20.827725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.395 "name": "raid_bdev1", 00:13:50.395 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:50.395 "strip_size_kb": 0, 00:13:50.395 "state": "online", 00:13:50.395 "raid_level": "raid1", 00:13:50.395 "superblock": true, 00:13:50.395 "num_base_bdevs": 3, 00:13:50.395 "num_base_bdevs_discovered": 3, 00:13:50.395 "num_base_bdevs_operational": 3, 00:13:50.395 "base_bdevs_list": [ 00:13:50.395 { 00:13:50.395 "name": "pt1", 00:13:50.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.395 "is_configured": true, 00:13:50.395 "data_offset": 2048, 00:13:50.395 "data_size": 63488 00:13:50.395 }, 00:13:50.395 { 00:13:50.395 "name": "pt2", 00:13:50.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.395 "is_configured": true, 00:13:50.395 "data_offset": 2048, 00:13:50.395 "data_size": 63488 00:13:50.395 }, 00:13:50.395 { 00:13:50.395 "name": "pt3", 00:13:50.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:50.395 "is_configured": true, 00:13:50.395 "data_offset": 2048, 00:13:50.395 "data_size": 63488 00:13:50.395 } 00:13:50.395 ] 00:13:50.395 }' 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.395 14:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:50.961 14:13:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 [2024-11-27 14:13:21.320219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:50.961 "name": "raid_bdev1", 00:13:50.961 "aliases": [ 00:13:50.961 "4d8ab394-a10f-4915-af9b-27d28cc72ec8" 00:13:50.961 ], 00:13:50.961 "product_name": "Raid Volume", 00:13:50.961 "block_size": 512, 00:13:50.961 "num_blocks": 63488, 00:13:50.961 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:50.961 "assigned_rate_limits": { 00:13:50.961 "rw_ios_per_sec": 0, 00:13:50.961 "rw_mbytes_per_sec": 0, 00:13:50.961 "r_mbytes_per_sec": 0, 00:13:50.961 "w_mbytes_per_sec": 0 00:13:50.961 }, 00:13:50.961 "claimed": false, 00:13:50.961 "zoned": false, 00:13:50.961 "supported_io_types": { 00:13:50.961 "read": true, 00:13:50.961 "write": true, 00:13:50.961 "unmap": false, 00:13:50.961 "flush": false, 00:13:50.961 "reset": true, 00:13:50.961 "nvme_admin": false, 00:13:50.961 "nvme_io": false, 00:13:50.961 "nvme_io_md": false, 00:13:50.961 "write_zeroes": true, 00:13:50.961 "zcopy": false, 00:13:50.961 "get_zone_info": false, 00:13:50.961 "zone_management": false, 00:13:50.961 "zone_append": false, 00:13:50.961 "compare": false, 00:13:50.961 
"compare_and_write": false, 00:13:50.961 "abort": false, 00:13:50.961 "seek_hole": false, 00:13:50.961 "seek_data": false, 00:13:50.961 "copy": false, 00:13:50.961 "nvme_iov_md": false 00:13:50.961 }, 00:13:50.961 "memory_domains": [ 00:13:50.961 { 00:13:50.961 "dma_device_id": "system", 00:13:50.961 "dma_device_type": 1 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.961 "dma_device_type": 2 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "dma_device_id": "system", 00:13:50.961 "dma_device_type": 1 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.961 "dma_device_type": 2 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "dma_device_id": "system", 00:13:50.961 "dma_device_type": 1 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.961 "dma_device_type": 2 00:13:50.961 } 00:13:50.961 ], 00:13:50.961 "driver_specific": { 00:13:50.961 "raid": { 00:13:50.961 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:50.961 "strip_size_kb": 0, 00:13:50.961 "state": "online", 00:13:50.961 "raid_level": "raid1", 00:13:50.961 "superblock": true, 00:13:50.961 "num_base_bdevs": 3, 00:13:50.961 "num_base_bdevs_discovered": 3, 00:13:50.961 "num_base_bdevs_operational": 3, 00:13:50.961 "base_bdevs_list": [ 00:13:50.961 { 00:13:50.961 "name": "pt1", 00:13:50.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.961 "is_configured": true, 00:13:50.961 "data_offset": 2048, 00:13:50.961 "data_size": 63488 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "name": "pt2", 00:13:50.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.961 "is_configured": true, 00:13:50.961 "data_offset": 2048, 00:13:50.961 "data_size": 63488 00:13:50.961 }, 00:13:50.961 { 00:13:50.961 "name": "pt3", 00:13:50.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:50.961 "is_configured": true, 00:13:50.961 "data_offset": 2048, 00:13:50.961 "data_size": 63488 00:13:50.961 } 
00:13:50.961 ] 00:13:50.961 } 00:13:50.961 } 00:13:50.961 }' 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:50.961 pt2 00:13:50.961 pt3' 00:13:50.961 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.962 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:50.962 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.962 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:50.962 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.962 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.962 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:51.219 [2024-11-27 14:13:21.640263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d8ab394-a10f-4915-af9b-27d28cc72ec8
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d8ab394-a10f-4915-af9b-27d28cc72ec8 ']'
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.219 [2024-11-27 14:13:21.683895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:51.219 [2024-11-27 14:13:21.683941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:51.219 [2024-11-27 14:13:21.684044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:51.219 [2024-11-27 14:13:21.684146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:51.219 [2024-11-27 14:13:21.684163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.219 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.477 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.478 [2024-11-27 14:13:21.827967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:51.478 [2024-11-27 14:13:21.830578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:51.478 [2024-11-27 14:13:21.830777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:13:51.478 [2024-11-27 14:13:21.830934] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:51.478 [2024-11-27 14:13:21.831167] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:51.478 [2024-11-27 14:13:21.831422] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:13:51.478 [2024-11-27 14:13:21.831630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:51.478 [2024-11-27 14:13:21.831677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:51.478 request:
00:13:51.478 {
00:13:51.478 "name": "raid_bdev1",
00:13:51.478 "raid_level": "raid1",
00:13:51.478 "base_bdevs": [
00:13:51.478 "malloc1",
00:13:51.478 "malloc2",
00:13:51.478 "malloc3"
00:13:51.478 ],
00:13:51.478 "superblock": false,
00:13:51.478 "method": "bdev_raid_create",
00:13:51.478 "req_id": 1
00:13:51.478 }
00:13:51.478 Got JSON-RPC error response
00:13:51.478 response:
00:13:51.478 {
00:13:51.478 "code": -17,
00:13:51.478 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:51.478 }
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.478 [2024-11-27 14:13:21.892096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:51.478 [2024-11-27 14:13:21.892287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:51.478 [2024-11-27 14:13:21.892362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:51.478 [2024-11-27 14:13:21.892383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:51.478 [2024-11-27 14:13:21.895249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:51.478 [2024-11-27 14:13:21.895308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:51.478 [2024-11-27 14:13:21.895409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:51.478 [2024-11-27 14:13:21.895474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:51.478 pt1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.478 "name": "raid_bdev1",
00:13:51.478 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8",
00:13:51.478 "strip_size_kb": 0,
00:13:51.478 "state": "configuring",
00:13:51.478 "raid_level": "raid1",
00:13:51.478 "superblock": true,
00:13:51.478 "num_base_bdevs": 3,
00:13:51.478 "num_base_bdevs_discovered": 1,
00:13:51.478 "num_base_bdevs_operational": 3,
00:13:51.478 "base_bdevs_list": [
00:13:51.478 {
00:13:51.478 "name": "pt1",
00:13:51.478 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:51.478 "is_configured": true,
00:13:51.478 "data_offset": 2048,
00:13:51.478 "data_size": 63488
00:13:51.478 },
00:13:51.478 {
00:13:51.478 "name": null,
00:13:51.478 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:51.478 "is_configured": false,
00:13:51.478 "data_offset": 2048,
00:13:51.478 "data_size": 63488
00:13:51.478 },
00:13:51.478 {
00:13:51.478 "name": null,
00:13:51.478 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:51.478 "is_configured": false,
00:13:51.478 "data_offset": 2048,
00:13:51.478 "data_size": 63488
00:13:51.478 }
00:13:51.478 ]
00:13:51.478 }'
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:51.478 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.044 [2024-11-27 14:13:22.432324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:52.044 [2024-11-27 14:13:22.432434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:52.044 [2024-11-27 14:13:22.432485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:13:52.044 [2024-11-27 14:13:22.432500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:52.044 [2024-11-27 14:13:22.433156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:52.044 [2024-11-27 14:13:22.433193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:52.044 [2024-11-27 14:13:22.433305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:52.044 [2024-11-27 14:13:22.433338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:52.044 pt2
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.044 [2024-11-27 14:13:22.440267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.044 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:52.044 "name": "raid_bdev1",
00:13:52.044 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8",
00:13:52.044 "strip_size_kb": 0,
00:13:52.044 "state": "configuring",
00:13:52.044 "raid_level": "raid1",
00:13:52.044 "superblock": true,
00:13:52.044 "num_base_bdevs": 3,
00:13:52.044 "num_base_bdevs_discovered": 1,
00:13:52.044 "num_base_bdevs_operational": 3,
00:13:52.044 "base_bdevs_list": [
00:13:52.044 {
00:13:52.044 "name": "pt1",
00:13:52.044 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:52.044 "is_configured": true,
00:13:52.044 "data_offset": 2048,
00:13:52.044 "data_size": 63488
00:13:52.044 },
00:13:52.044 {
00:13:52.044 "name": null,
00:13:52.044 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:52.044 "is_configured": false,
00:13:52.044 "data_offset": 0,
00:13:52.044 "data_size": 63488
00:13:52.044 },
00:13:52.044 {
00:13:52.044 "name": null,
00:13:52.044 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:52.045 "is_configured": false,
00:13:52.045 "data_offset": 2048,
00:13:52.045 "data_size": 63488
00:13:52.045 }
00:13:52.045 ]
00:13:52.045 }'
00:13:52.045 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:52.045 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.611 [2024-11-27 14:13:22.980398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:52.611 [2024-11-27 14:13:22.980493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:52.611 [2024-11-27 14:13:22.980524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:13:52.611 [2024-11-27 14:13:22.980541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:52.611 [2024-11-27 14:13:22.981330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:52.611 [2024-11-27 14:13:22.981367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:52.611 [2024-11-27 14:13:22.981469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:52.611 [2024-11-27 14:13:22.981519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:52.611 pt2
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.611 [2024-11-27 14:13:22.988375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:52.611 [2024-11-27 14:13:22.988570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:52.611 [2024-11-27 14:13:22.988603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:52.611 [2024-11-27 14:13:22.988621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:52.611 [2024-11-27 14:13:22.989124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:52.611 [2024-11-27 14:13:22.989168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:52.611 [2024-11-27 14:13:22.989258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:52.611 [2024-11-27 14:13:22.989293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:52.611 [2024-11-27 14:13:22.989450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:52.611 [2024-11-27 14:13:22.989473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:52.611 [2024-11-27 14:13:22.989787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:52.611 [2024-11-27 14:13:22.990033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:52.611 [2024-11-27 14:13:22.990049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:52.611 [2024-11-27 14:13:22.990219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:52.611 pt3
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:52.611 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.611 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.611 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.611 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:52.611 "name": "raid_bdev1",
00:13:52.611 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8",
00:13:52.611 "strip_size_kb": 0,
00:13:52.611 "state": "online",
00:13:52.611 "raid_level": "raid1",
00:13:52.611 "superblock": true,
00:13:52.611 "num_base_bdevs": 3,
00:13:52.611 "num_base_bdevs_discovered": 3,
00:13:52.611 "num_base_bdevs_operational": 3,
00:13:52.611 "base_bdevs_list": [
00:13:52.611 {
00:13:52.611 "name": "pt1",
00:13:52.611 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:52.611 "is_configured": true,
00:13:52.611 "data_offset": 2048,
00:13:52.611 "data_size": 63488
00:13:52.611 },
00:13:52.611 {
00:13:52.611 "name": "pt2",
00:13:52.611 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:52.611 "is_configured": true,
00:13:52.611 "data_offset": 2048,
00:13:52.611 "data_size": 63488
00:13:52.611 },
00:13:52.611 {
00:13:52.611 "name": "pt3",
00:13:52.611 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:52.611 "is_configured": true,
00:13:52.611 "data_offset": 2048,
00:13:52.611 "data_size": 63488
00:13:52.611 }
00:13:52.611 ]
00:13:52.611 }'
00:13:52.611 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:52.611 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.178 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.178 [2024-11-27 14:13:23.516954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:53.179 "name": "raid_bdev1",
00:13:53.179 "aliases": [
00:13:53.179 "4d8ab394-a10f-4915-af9b-27d28cc72ec8"
00:13:53.179 ],
00:13:53.179 "product_name": "Raid Volume",
00:13:53.179 "block_size": 512,
00:13:53.179 "num_blocks": 63488,
00:13:53.179 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8",
00:13:53.179 "assigned_rate_limits": {
00:13:53.179 "rw_ios_per_sec": 0,
00:13:53.179 "rw_mbytes_per_sec": 0,
00:13:53.179 "r_mbytes_per_sec": 0,
00:13:53.179 "w_mbytes_per_sec": 0
00:13:53.179 },
00:13:53.179 "claimed": false,
00:13:53.179 "zoned": false,
00:13:53.179 "supported_io_types": {
00:13:53.179 "read": true,
00:13:53.179 "write": true,
00:13:53.179 "unmap": false,
00:13:53.179 "flush": false,
00:13:53.179 "reset": true,
00:13:53.179 "nvme_admin": false,
00:13:53.179 "nvme_io": false,
00:13:53.179 "nvme_io_md": false,
00:13:53.179 "write_zeroes": true,
00:13:53.179 "zcopy": false,
00:13:53.179 "get_zone_info": false,
00:13:53.179 "zone_management": false,
00:13:53.179 "zone_append": false,
00:13:53.179 "compare": false,
00:13:53.179 "compare_and_write": false,
00:13:53.179 "abort": false,
00:13:53.179 "seek_hole": false,
00:13:53.179 "seek_data": false,
00:13:53.179 "copy": false,
00:13:53.179 "nvme_iov_md": false
00:13:53.179 },
00:13:53.179 "memory_domains": [
00:13:53.179 {
00:13:53.179 "dma_device_id": "system",
00:13:53.179 "dma_device_type": 1
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.179 "dma_device_type": 2
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "dma_device_id": "system",
00:13:53.179 "dma_device_type": 1
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.179 "dma_device_type": 2
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "dma_device_id": "system",
00:13:53.179 "dma_device_type": 1
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:53.179 "dma_device_type": 2
00:13:53.179 }
00:13:53.179 ],
00:13:53.179 "driver_specific": {
00:13:53.179 "raid": {
00:13:53.179 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8",
00:13:53.179 "strip_size_kb": 0,
00:13:53.179 "state": "online",
00:13:53.179 "raid_level": "raid1",
00:13:53.179 "superblock": true,
00:13:53.179 "num_base_bdevs": 3,
00:13:53.179 "num_base_bdevs_discovered": 3,
00:13:53.179 "num_base_bdevs_operational": 3,
00:13:53.179 "base_bdevs_list": [
00:13:53.179 {
00:13:53.179 "name": "pt1",
00:13:53.179 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:53.179 "is_configured": true,
00:13:53.179 "data_offset": 2048,
00:13:53.179 "data_size": 63488
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "name": "pt2",
00:13:53.179 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:53.179 "is_configured": true,
00:13:53.179 "data_offset": 2048,
00:13:53.179 "data_size": 63488
00:13:53.179 },
00:13:53.179 {
00:13:53.179 "name": "pt3",
00:13:53.179 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:53.179 "is_configured": true,
00:13:53.179 "data_offset": 2048,
00:13:53.179 "data_size": 63488
00:13:53.179 }
00:13:53.179 ]
00:13:53.179 }
00:13:53.179 }
00:13:53.179 }'
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:53.179 pt2
00:13:53.179 pt3'
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:53.179 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:53.487 [2024-11-27 14:13:23.873003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d8ab394-a10f-4915-af9b-27d28cc72ec8 '!=' 4d8ab394-a10f-4915-af9b-27d28cc72ec8 ']'
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.487 [2024-11-27 14:13:23.924705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.487 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.746 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.746 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.746 "name": "raid_bdev1",
00:13:53.746 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8",
00:13:53.746 "strip_size_kb": 0,
00:13:53.746 "state": "online",
00:13:53.746 "raid_level": "raid1",
00:13:53.746 "superblock": true,
00:13:53.746 "num_base_bdevs": 3,
00:13:53.746 "num_base_bdevs_discovered": 2,
00:13:53.746 "num_base_bdevs_operational": 2,
00:13:53.746 "base_bdevs_list": [
00:13:53.746 {
00:13:53.746 "name": null,
00:13:53.746 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.746 "is_configured": false,
00:13:53.746 "data_offset": 0,
00:13:53.746 "data_size": 63488
00:13:53.746 },
00:13:53.746 {
00:13:53.746 "name": "pt2",
00:13:53.746 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:53.746 "is_configured": true,
00:13:53.746 "data_offset": 2048,
00:13:53.746 "data_size": 63488
00:13:53.746 },
00:13:53.746 {
00:13:53.746 "name": "pt3",
00:13:53.746 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:53.746 "is_configured": true,
00:13:53.746 "data_offset": 2048,
00:13:53.746 "data_size": 63488
00:13:53.746 }
00:13:53.746 ]
00:13:53.746 }'
00:13:53.746 14:13:23
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.746 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.010 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.010 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.010 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.011 [2024-11-27 14:13:24.412776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.011 [2024-11-27 14:13:24.412956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.011 [2024-11-27 14:13:24.413173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.011 [2024-11-27 14:13:24.413361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.011 [2024-11-27 14:13:24.413398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:54.011 
14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.011 [2024-11-27 14:13:24.488734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.011 [2024-11-27 14:13:24.488816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.011 [2024-11-27 14:13:24.488859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:54.011 [2024-11-27 14:13:24.488880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.011 [2024-11-27 14:13:24.491803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.011 [2024-11-27 14:13:24.491871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.011 [2024-11-27 14:13:24.491975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:54.011 [2024-11-27 14:13:24.492043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:54.011 pt2 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.011 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.270 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.270 "name": "raid_bdev1", 00:13:54.270 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:54.270 "strip_size_kb": 0, 00:13:54.270 "state": "configuring", 00:13:54.270 "raid_level": "raid1", 00:13:54.270 "superblock": true, 00:13:54.270 "num_base_bdevs": 3, 00:13:54.270 "num_base_bdevs_discovered": 1, 00:13:54.270 "num_base_bdevs_operational": 2, 00:13:54.270 "base_bdevs_list": [ 00:13:54.270 { 00:13:54.270 "name": null, 00:13:54.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.270 "is_configured": false, 00:13:54.270 "data_offset": 2048, 00:13:54.270 "data_size": 63488 00:13:54.270 }, 00:13:54.270 { 00:13:54.270 "name": "pt2", 00:13:54.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.270 "is_configured": true, 00:13:54.270 "data_offset": 2048, 00:13:54.270 "data_size": 63488 00:13:54.270 }, 00:13:54.270 { 00:13:54.270 "name": null, 00:13:54.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.270 "is_configured": false, 00:13:54.270 "data_offset": 2048, 00:13:54.270 "data_size": 63488 00:13:54.270 } 00:13:54.270 ] 00:13:54.270 }' 
00:13:54.270 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.270 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.529 [2024-11-27 14:13:25.020958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:54.529 [2024-11-27 14:13:25.022004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.529 [2024-11-27 14:13:25.022048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:54.529 [2024-11-27 14:13:25.022067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.529 [2024-11-27 14:13:25.022680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.529 [2024-11-27 14:13:25.022716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:54.529 [2024-11-27 14:13:25.022880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:54.529 [2024-11-27 14:13:25.022923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:54.529 [2024-11-27 14:13:25.023071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:54.529 [2024-11-27 14:13:25.023099] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.529 [2024-11-27 14:13:25.023422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:54.529 [2024-11-27 14:13:25.023619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:54.529 [2024-11-27 14:13:25.023641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:54.529 [2024-11-27 14:13:25.023814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.529 pt3 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.529 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.787 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.787 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.787 "name": "raid_bdev1", 00:13:54.787 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:54.787 "strip_size_kb": 0, 00:13:54.787 "state": "online", 00:13:54.787 "raid_level": "raid1", 00:13:54.787 "superblock": true, 00:13:54.787 "num_base_bdevs": 3, 00:13:54.787 "num_base_bdevs_discovered": 2, 00:13:54.787 "num_base_bdevs_operational": 2, 00:13:54.787 "base_bdevs_list": [ 00:13:54.788 { 00:13:54.788 "name": null, 00:13:54.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.788 "is_configured": false, 00:13:54.788 "data_offset": 2048, 00:13:54.788 "data_size": 63488 00:13:54.788 }, 00:13:54.788 { 00:13:54.788 "name": "pt2", 00:13:54.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.788 "is_configured": true, 00:13:54.788 "data_offset": 2048, 00:13:54.788 "data_size": 63488 00:13:54.788 }, 00:13:54.788 { 00:13:54.788 "name": "pt3", 00:13:54.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.788 "is_configured": true, 00:13:54.788 "data_offset": 2048, 00:13:54.788 "data_size": 63488 00:13:54.788 } 00:13:54.788 ] 00:13:54.788 }' 00:13:54.788 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.788 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.355 
14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.355 [2024-11-27 14:13:25.589054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.355 [2024-11-27 14:13:25.589230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.355 [2024-11-27 14:13:25.589444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.355 [2024-11-27 14:13:25.589656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.355 [2024-11-27 14:13:25.589828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.355 14:13:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.355 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.356 [2024-11-27 14:13:25.661081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:55.356 [2024-11-27 14:13:25.661156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.356 [2024-11-27 14:13:25.661186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:55.356 [2024-11-27 14:13:25.661201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.356 [2024-11-27 14:13:25.664078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.356 [2024-11-27 14:13:25.664123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:55.356 [2024-11-27 14:13:25.664229] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:55.356 [2024-11-27 14:13:25.664292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:55.356 [2024-11-27 14:13:25.664458] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:55.356 [2024-11-27 14:13:25.664476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.356 [2024-11-27 14:13:25.664499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:55.356 [2024-11-27 
14:13:25.664579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:55.356 pt1 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.356 14:13:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.356 "name": "raid_bdev1", 00:13:55.356 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:55.356 "strip_size_kb": 0, 00:13:55.356 "state": "configuring", 00:13:55.356 "raid_level": "raid1", 00:13:55.356 "superblock": true, 00:13:55.356 "num_base_bdevs": 3, 00:13:55.356 "num_base_bdevs_discovered": 1, 00:13:55.356 "num_base_bdevs_operational": 2, 00:13:55.356 "base_bdevs_list": [ 00:13:55.356 { 00:13:55.356 "name": null, 00:13:55.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.356 "is_configured": false, 00:13:55.356 "data_offset": 2048, 00:13:55.356 "data_size": 63488 00:13:55.356 }, 00:13:55.356 { 00:13:55.356 "name": "pt2", 00:13:55.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.356 "is_configured": true, 00:13:55.356 "data_offset": 2048, 00:13:55.356 "data_size": 63488 00:13:55.356 }, 00:13:55.356 { 00:13:55.356 "name": null, 00:13:55.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.356 "is_configured": false, 00:13:55.356 "data_offset": 2048, 00:13:55.356 "data_size": 63488 00:13:55.356 } 00:13:55.356 ] 00:13:55.356 }' 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.356 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.923 [2024-11-27 14:13:26.253276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:55.923 [2024-11-27 14:13:26.253510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.923 [2024-11-27 14:13:26.253556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:55.923 [2024-11-27 14:13:26.253573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.923 [2024-11-27 14:13:26.254222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.923 [2024-11-27 14:13:26.254256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:55.923 [2024-11-27 14:13:26.254366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:55.923 [2024-11-27 14:13:26.254399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:55.923 [2024-11-27 14:13:26.254551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:55.923 [2024-11-27 14:13:26.254567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:55.923 [2024-11-27 14:13:26.254898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:55.923 [2024-11-27 14:13:26.255100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:55.923 [2024-11-27 14:13:26.255125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:13:55.923 [2024-11-27 14:13:26.255291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.923 pt3 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.923 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.924 "name": "raid_bdev1", 00:13:55.924 "uuid": "4d8ab394-a10f-4915-af9b-27d28cc72ec8", 00:13:55.924 "strip_size_kb": 0, 00:13:55.924 "state": "online", 00:13:55.924 "raid_level": "raid1", 00:13:55.924 "superblock": true, 00:13:55.924 "num_base_bdevs": 3, 00:13:55.924 "num_base_bdevs_discovered": 2, 00:13:55.924 "num_base_bdevs_operational": 2, 00:13:55.924 "base_bdevs_list": [ 00:13:55.924 { 00:13:55.924 "name": null, 00:13:55.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.924 "is_configured": false, 00:13:55.924 "data_offset": 2048, 00:13:55.924 "data_size": 63488 00:13:55.924 }, 00:13:55.924 { 00:13:55.924 "name": "pt2", 00:13:55.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.924 "is_configured": true, 00:13:55.924 "data_offset": 2048, 00:13:55.924 "data_size": 63488 00:13:55.924 }, 00:13:55.924 { 00:13:55.924 "name": "pt3", 00:13:55.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.924 "is_configured": true, 00:13:55.924 "data_offset": 2048, 00:13:55.924 "data_size": 63488 00:13:55.924 } 00:13:55.924 ] 00:13:55.924 }' 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.924 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:56.492 
14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.492 [2024-11-27 14:13:26.837737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4d8ab394-a10f-4915-af9b-27d28cc72ec8 '!=' 4d8ab394-a10f-4915-af9b-27d28cc72ec8 ']' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68885 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68885 ']' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68885 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68885 00:13:56.492 killing process with pid 68885 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68885' 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68885 00:13:56.492 [2024-11-27 
14:13:26.923058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.492 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68885 00:13:56.492 [2024-11-27 14:13:26.923179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.492 [2024-11-27 14:13:26.923261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.492 [2024-11-27 14:13:26.923282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:56.751 [2024-11-27 14:13:27.195900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.127 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:58.127 00:13:58.127 real 0m8.805s 00:13:58.127 user 0m14.267s 00:13:58.127 sys 0m1.272s 00:13:58.127 ************************************ 00:13:58.127 END TEST raid_superblock_test 00:13:58.127 ************************************ 00:13:58.127 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.127 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.127 14:13:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:58.127 14:13:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:58.127 14:13:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.127 14:13:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.127 ************************************ 00:13:58.127 START TEST raid_read_error_test 00:13:58.127 ************************************ 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:58.127 14:13:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qcoslpEjyT 00:13:58.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69342 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69342 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69342 ']' 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.127 14:13:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.127 [2024-11-27 14:13:28.533303] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:13:58.127 [2024-11-27 14:13:28.533649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69342 ] 00:13:58.386 [2024-11-27 14:13:28.711592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.386 [2024-11-27 14:13:28.858522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.645 [2024-11-27 14:13:29.091334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.645 [2024-11-27 14:13:29.091448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 BaseBdev1_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 true 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 [2024-11-27 14:13:29.551445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:59.210 [2024-11-27 14:13:29.551565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.210 [2024-11-27 14:13:29.551604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:59.210 [2024-11-27 14:13:29.551627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.210 [2024-11-27 14:13:29.554853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.210 [2024-11-27 14:13:29.554909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.210 BaseBdev1 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 BaseBdev2_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 true 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 [2024-11-27 14:13:29.615729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:59.210 [2024-11-27 14:13:29.615854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.210 [2024-11-27 14:13:29.615888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:59.210 [2024-11-27 14:13:29.615910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.210 [2024-11-27 14:13:29.618948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.210 [2024-11-27 14:13:29.619006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.210 BaseBdev2 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 BaseBdev3_malloc 00:13:59.210 14:13:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 true 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 [2024-11-27 14:13:29.695952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:59.210 [2024-11-27 14:13:29.696344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.210 [2024-11-27 14:13:29.696389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:59.210 [2024-11-27 14:13:29.696414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.210 [2024-11-27 14:13:29.699463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.210 [2024-11-27 14:13:29.699654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.210 BaseBdev3 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.210 [2024-11-27 14:13:29.704088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.210 [2024-11-27 14:13:29.706730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.210 [2024-11-27 14:13:29.706875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.210 [2024-11-27 14:13:29.707200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:59.210 [2024-11-27 14:13:29.707233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:59.210 [2024-11-27 14:13:29.707567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:59.210 [2024-11-27 14:13:29.707851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:59.210 [2024-11-27 14:13:29.707876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:59.210 [2024-11-27 14:13:29.708136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.210 14:13:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.210 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.468 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.468 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.468 "name": "raid_bdev1", 00:13:59.468 "uuid": "5e66ad22-d887-447f-bdeb-89cb27ea8684", 00:13:59.468 "strip_size_kb": 0, 00:13:59.468 "state": "online", 00:13:59.468 "raid_level": "raid1", 00:13:59.468 "superblock": true, 00:13:59.468 "num_base_bdevs": 3, 00:13:59.468 "num_base_bdevs_discovered": 3, 00:13:59.468 "num_base_bdevs_operational": 3, 00:13:59.468 "base_bdevs_list": [ 00:13:59.468 { 00:13:59.469 "name": "BaseBdev1", 00:13:59.469 "uuid": "b8494b3c-b8ec-5841-a9b7-8005e1ff205a", 00:13:59.469 "is_configured": true, 00:13:59.469 "data_offset": 2048, 00:13:59.469 "data_size": 63488 00:13:59.469 }, 00:13:59.469 { 00:13:59.469 "name": "BaseBdev2", 00:13:59.469 "uuid": "5c0691d7-7487-5c28-ab0e-70fe81f4e523", 00:13:59.469 "is_configured": true, 00:13:59.469 "data_offset": 2048, 00:13:59.469 "data_size": 63488 
00:13:59.469 }, 00:13:59.469 { 00:13:59.469 "name": "BaseBdev3", 00:13:59.469 "uuid": "71165950-72ae-5d55-95bf-5a23769901c0", 00:13:59.469 "is_configured": true, 00:13:59.469 "data_offset": 2048, 00:13:59.469 "data_size": 63488 00:13:59.469 } 00:13:59.469 ] 00:13:59.469 }' 00:13:59.469 14:13:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.469 14:13:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.036 14:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:00.036 14:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:00.036 [2024-11-27 14:13:30.374060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.972 
14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.972 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.973 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.973 "name": "raid_bdev1", 00:14:00.973 "uuid": "5e66ad22-d887-447f-bdeb-89cb27ea8684", 00:14:00.973 "strip_size_kb": 0, 00:14:00.973 "state": "online", 00:14:00.973 "raid_level": "raid1", 00:14:00.973 "superblock": true, 00:14:00.973 "num_base_bdevs": 3, 00:14:00.973 "num_base_bdevs_discovered": 3, 00:14:00.973 "num_base_bdevs_operational": 3, 00:14:00.973 "base_bdevs_list": [ 00:14:00.973 { 00:14:00.973 "name": "BaseBdev1", 00:14:00.973 "uuid": "b8494b3c-b8ec-5841-a9b7-8005e1ff205a", 
00:14:00.973 "is_configured": true, 00:14:00.973 "data_offset": 2048, 00:14:00.973 "data_size": 63488 00:14:00.973 }, 00:14:00.973 { 00:14:00.973 "name": "BaseBdev2", 00:14:00.973 "uuid": "5c0691d7-7487-5c28-ab0e-70fe81f4e523", 00:14:00.973 "is_configured": true, 00:14:00.973 "data_offset": 2048, 00:14:00.973 "data_size": 63488 00:14:00.973 }, 00:14:00.973 { 00:14:00.973 "name": "BaseBdev3", 00:14:00.973 "uuid": "71165950-72ae-5d55-95bf-5a23769901c0", 00:14:00.973 "is_configured": true, 00:14:00.973 "data_offset": 2048, 00:14:00.973 "data_size": 63488 00:14:00.973 } 00:14:00.973 ] 00:14:00.973 }' 00:14:00.973 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.973 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.550 [2024-11-27 14:13:31.775867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.550 [2024-11-27 14:13:31.776359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.550 [2024-11-27 14:13:31.780076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.550 [2024-11-27 14:13:31.780406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.550 { 00:14:01.550 "results": [ 00:14:01.550 { 00:14:01.550 "job": "raid_bdev1", 00:14:01.550 "core_mask": "0x1", 00:14:01.550 "workload": "randrw", 00:14:01.550 "percentage": 50, 00:14:01.550 "status": "finished", 00:14:01.550 "queue_depth": 1, 00:14:01.550 "io_size": 131072, 00:14:01.550 "runtime": 1.399266, 00:14:01.550 "iops": 7827.675366942382, 00:14:01.550 "mibps": 978.4594208677978, 
00:14:01.550 "io_failed": 0, 00:14:01.550 "io_timeout": 0, 00:14:01.550 "avg_latency_us": 122.70248516388203, 00:14:01.550 "min_latency_us": 45.847272727272724, 00:14:01.550 "max_latency_us": 1980.9745454545455 00:14:01.550 } 00:14:01.550 ], 00:14:01.550 "core_count": 1 00:14:01.550 } 00:14:01.550 [2024-11-27 14:13:31.780726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.550 [2024-11-27 14:13:31.780759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69342 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69342 ']' 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69342 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69342 00:14:01.550 killing process with pid 69342 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69342' 00:14:01.550 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69342 00:14:01.550 [2024-11-27 14:13:31.819304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.550 14:13:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69342 00:14:01.550 [2024-11-27 14:13:32.046017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qcoslpEjyT 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:02.954 00:14:02.954 real 0m4.874s 00:14:02.954 user 0m5.918s 00:14:02.954 sys 0m0.665s 00:14:02.954 ************************************ 00:14:02.954 END TEST raid_read_error_test 00:14:02.954 ************************************ 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.954 14:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.954 14:13:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:02.954 14:13:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:02.954 14:13:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.954 14:13:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.954 ************************************ 00:14:02.954 START TEST raid_write_error_test 00:14:02.954 ************************************ 00:14:02.954 14:13:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:02.954 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.B1fNrAkZ1i 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69493 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69493 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69493 ']' 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.955 14:13:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.955 [2024-11-27 14:13:33.455254] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:14:02.955 [2024-11-27 14:13:33.455435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69493 ] 00:14:03.213 [2024-11-27 14:13:33.645553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.472 [2024-11-27 14:13:33.812269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.730 [2024-11-27 14:13:34.065550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.730 [2024-11-27 14:13:34.065593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 BaseBdev1_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 true 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 [2024-11-27 14:13:34.574602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:04.298 [2024-11-27 14:13:34.574835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.298 [2024-11-27 14:13:34.574996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:04.298 [2024-11-27 14:13:34.575133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.298 [2024-11-27 14:13:34.578012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.298 [2024-11-27 14:13:34.578283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.298 BaseBdev1 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.298 BaseBdev2_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 true 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 [2024-11-27 14:13:34.637186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:04.298 [2024-11-27 14:13:34.637304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.298 [2024-11-27 14:13:34.637331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:04.298 [2024-11-27 14:13:34.637348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.298 [2024-11-27 14:13:34.640151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.298 [2024-11-27 14:13:34.640333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:04.298 BaseBdev2 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:04.298 14:13:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 BaseBdev3_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 true 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 [2024-11-27 14:13:34.707594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:04.298 [2024-11-27 14:13:34.707716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.298 [2024-11-27 14:13:34.707760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:04.298 [2024-11-27 14:13:34.707793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.298 [2024-11-27 14:13:34.711846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.298 [2024-11-27 14:13:34.712126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:04.298 BaseBdev3 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.298 [2024-11-27 14:13:34.720412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.298 [2024-11-27 14:13:34.722921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.298 [2024-11-27 14:13:34.723167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.298 [2024-11-27 14:13:34.723595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:04.298 [2024-11-27 14:13:34.723651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.298 [2024-11-27 14:13:34.724157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:04.298 [2024-11-27 14:13:34.724534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:04.298 [2024-11-27 14:13:34.724568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:04.298 [2024-11-27 14:13:34.724971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.298 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.299 "name": "raid_bdev1", 00:14:04.299 "uuid": "878e357f-79de-41fe-9e72-24e8a4a5791f", 00:14:04.299 "strip_size_kb": 0, 00:14:04.299 "state": "online", 00:14:04.299 "raid_level": "raid1", 00:14:04.299 "superblock": true, 00:14:04.299 "num_base_bdevs": 3, 00:14:04.299 "num_base_bdevs_discovered": 3, 00:14:04.299 "num_base_bdevs_operational": 3, 00:14:04.299 "base_bdevs_list": [ 00:14:04.299 { 00:14:04.299 "name": "BaseBdev1", 00:14:04.299 
"uuid": "895379a0-7037-505b-b1b8-1abd917258c8", 00:14:04.299 "is_configured": true, 00:14:04.299 "data_offset": 2048, 00:14:04.299 "data_size": 63488 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "BaseBdev2", 00:14:04.299 "uuid": "0770b9f4-715c-57f4-81df-983110677a98", 00:14:04.299 "is_configured": true, 00:14:04.299 "data_offset": 2048, 00:14:04.299 "data_size": 63488 00:14:04.299 }, 00:14:04.299 { 00:14:04.299 "name": "BaseBdev3", 00:14:04.299 "uuid": "86c3bf8a-ca26-54b6-a0ee-631ab5ec89eb", 00:14:04.299 "is_configured": true, 00:14:04.299 "data_offset": 2048, 00:14:04.299 "data_size": 63488 00:14:04.299 } 00:14:04.299 ] 00:14:04.299 }' 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.299 14:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.865 14:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:04.865 14:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.123 [2024-11-27 14:13:35.378604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.057 [2024-11-27 14:13:36.257865] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:06.057 [2024-11-27 14:13:36.257926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.057 [2024-11-27 14:13:36.258191] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.057 "name": "raid_bdev1", 00:14:06.057 "uuid": "878e357f-79de-41fe-9e72-24e8a4a5791f", 00:14:06.057 "strip_size_kb": 0, 00:14:06.057 "state": "online", 00:14:06.057 "raid_level": "raid1", 00:14:06.057 "superblock": true, 00:14:06.057 "num_base_bdevs": 3, 00:14:06.057 "num_base_bdevs_discovered": 2, 00:14:06.057 "num_base_bdevs_operational": 2, 00:14:06.057 "base_bdevs_list": [ 00:14:06.057 { 00:14:06.057 "name": null, 00:14:06.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.057 "is_configured": false, 00:14:06.057 "data_offset": 0, 00:14:06.057 "data_size": 63488 00:14:06.057 }, 00:14:06.057 { 00:14:06.057 "name": "BaseBdev2", 00:14:06.057 "uuid": "0770b9f4-715c-57f4-81df-983110677a98", 00:14:06.057 "is_configured": true, 00:14:06.057 "data_offset": 2048, 00:14:06.057 "data_size": 63488 00:14:06.057 }, 00:14:06.057 { 00:14:06.057 "name": "BaseBdev3", 00:14:06.057 "uuid": "86c3bf8a-ca26-54b6-a0ee-631ab5ec89eb", 00:14:06.057 "is_configured": true, 00:14:06.057 "data_offset": 2048, 00:14:06.057 "data_size": 63488 00:14:06.057 } 00:14:06.057 ] 00:14:06.057 }' 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.057 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.317 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.317 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.317 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.318 [2024-11-27 14:13:36.828750] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.576 [2024-11-27 14:13:36.828975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.576 [2024-11-27 14:13:36.832456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.576 { 00:14:06.576 "results": [ 00:14:06.576 { 00:14:06.576 "job": "raid_bdev1", 00:14:06.576 "core_mask": "0x1", 00:14:06.576 "workload": "randrw", 00:14:06.576 "percentage": 50, 00:14:06.576 "status": "finished", 00:14:06.576 "queue_depth": 1, 00:14:06.576 "io_size": 131072, 00:14:06.576 "runtime": 1.447417, 00:14:06.576 "iops": 9880.359288304615, 00:14:06.576 "mibps": 1235.0449110380769, 00:14:06.576 "io_failed": 0, 00:14:06.576 "io_timeout": 0, 00:14:06.576 "avg_latency_us": 96.4744781992359, 00:14:06.576 "min_latency_us": 42.589090909090906, 00:14:06.576 "max_latency_us": 1817.1345454545456 00:14:06.576 } 00:14:06.576 ], 00:14:06.576 "core_count": 1 00:14:06.576 } 00:14:06.576 [2024-11-27 14:13:36.832691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.576 [2024-11-27 14:13:36.832890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.576 [2024-11-27 14:13:36.832922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69493 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69493 ']' 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69493 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:06.576 14:13:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69493 00:14:06.576 killing process with pid 69493 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69493' 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69493 00:14:06.576 [2024-11-27 14:13:36.874546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.576 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69493 00:14:06.576 [2024-11-27 14:13:37.080513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.B1fNrAkZ1i 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:07.951 ************************************ 00:14:07.951 END TEST raid_write_error_test 00:14:07.951 
************************************ 00:14:07.951 00:14:07.951 real 0m4.957s 00:14:07.951 user 0m6.164s 00:14:07.951 sys 0m0.631s 00:14:07.951 14:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.952 14:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.952 14:13:38 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:07.952 14:13:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:07.952 14:13:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:07.952 14:13:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:07.952 14:13:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.952 14:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.952 ************************************ 00:14:07.952 START TEST raid_state_function_test 00:14:07.952 ************************************ 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.952 Process raid pid: 69637 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69637 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69637' 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69637 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69637 ']' 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.952 14:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.210 [2024-11-27 14:13:38.469025] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:14:08.210 [2024-11-27 14:13:38.469423] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.210 [2024-11-27 14:13:38.651109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.468 [2024-11-27 14:13:38.797310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.727 [2024-11-27 14:13:39.029031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.727 [2024-11-27 14:13:39.029382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.986 [2024-11-27 14:13:39.460594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.986 [2024-11-27 14:13:39.460711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.986 [2024-11-27 14:13:39.460743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.986 [2024-11-27 14:13:39.460763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.986 [2024-11-27 14:13:39.460775] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:08.986 [2024-11-27 14:13:39.460793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.986 [2024-11-27 14:13:39.460805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:08.986 [2024-11-27 14:13:39.460844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.986 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.244 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.244 "name": "Existed_Raid", 00:14:09.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.244 "strip_size_kb": 64, 00:14:09.244 "state": "configuring", 00:14:09.244 "raid_level": "raid0", 00:14:09.244 "superblock": false, 00:14:09.244 "num_base_bdevs": 4, 00:14:09.244 "num_base_bdevs_discovered": 0, 00:14:09.244 "num_base_bdevs_operational": 4, 00:14:09.244 "base_bdevs_list": [ 00:14:09.244 { 00:14:09.244 "name": "BaseBdev1", 00:14:09.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.244 "is_configured": false, 00:14:09.244 "data_offset": 0, 00:14:09.244 "data_size": 0 00:14:09.244 }, 00:14:09.244 { 00:14:09.244 "name": "BaseBdev2", 00:14:09.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.244 "is_configured": false, 00:14:09.244 "data_offset": 0, 00:14:09.244 "data_size": 0 00:14:09.244 }, 00:14:09.244 { 00:14:09.244 "name": "BaseBdev3", 00:14:09.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.244 "is_configured": false, 00:14:09.244 "data_offset": 0, 00:14:09.244 "data_size": 0 00:14:09.244 }, 00:14:09.244 { 00:14:09.244 "name": "BaseBdev4", 00:14:09.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.244 "is_configured": false, 00:14:09.244 "data_offset": 0, 00:14:09.244 "data_size": 0 00:14:09.244 } 00:14:09.244 ] 00:14:09.244 }' 00:14:09.244 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.244 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.503 [2024-11-27 14:13:39.980768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.503 [2024-11-27 14:13:39.980880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.503 [2024-11-27 14:13:39.988693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.503 [2024-11-27 14:13:39.988764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.503 [2024-11-27 14:13:39.988785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.503 [2024-11-27 14:13:39.988805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.503 [2024-11-27 14:13:39.988831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.503 [2024-11-27 14:13:39.988855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.503 [2024-11-27 14:13:39.988868] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:09.503 [2024-11-27 14:13:39.988886] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.503 14:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.761 [2024-11-27 14:13:40.039688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.761 BaseBdev1 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.761 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 [ 00:14:09.762 { 00:14:09.762 "name": "BaseBdev1", 00:14:09.762 "aliases": [ 00:14:09.762 "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699" 00:14:09.762 ], 00:14:09.762 "product_name": "Malloc disk", 00:14:09.762 "block_size": 512, 00:14:09.762 "num_blocks": 65536, 00:14:09.762 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:09.762 "assigned_rate_limits": { 00:14:09.762 "rw_ios_per_sec": 0, 00:14:09.762 "rw_mbytes_per_sec": 0, 00:14:09.762 "r_mbytes_per_sec": 0, 00:14:09.762 "w_mbytes_per_sec": 0 00:14:09.762 }, 00:14:09.762 "claimed": true, 00:14:09.762 "claim_type": "exclusive_write", 00:14:09.762 "zoned": false, 00:14:09.762 "supported_io_types": { 00:14:09.762 "read": true, 00:14:09.762 "write": true, 00:14:09.762 "unmap": true, 00:14:09.762 "flush": true, 00:14:09.762 "reset": true, 00:14:09.762 "nvme_admin": false, 00:14:09.762 "nvme_io": false, 00:14:09.762 "nvme_io_md": false, 00:14:09.762 "write_zeroes": true, 00:14:09.762 "zcopy": true, 00:14:09.762 "get_zone_info": false, 00:14:09.762 "zone_management": false, 00:14:09.762 "zone_append": false, 00:14:09.762 "compare": false, 00:14:09.762 "compare_and_write": false, 00:14:09.762 "abort": true, 00:14:09.762 "seek_hole": false, 00:14:09.762 "seek_data": false, 00:14:09.762 "copy": true, 00:14:09.762 "nvme_iov_md": false 00:14:09.762 }, 00:14:09.762 "memory_domains": [ 00:14:09.762 { 00:14:09.762 "dma_device_id": "system", 00:14:09.762 "dma_device_type": 1 00:14:09.762 }, 00:14:09.762 { 00:14:09.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.762 "dma_device_type": 2 00:14:09.762 } 00:14:09.762 ], 00:14:09.762 "driver_specific": {} 00:14:09.762 } 00:14:09.762 ] 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.762 "name": "Existed_Raid", 
00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.762 "strip_size_kb": 64, 00:14:09.762 "state": "configuring", 00:14:09.762 "raid_level": "raid0", 00:14:09.762 "superblock": false, 00:14:09.762 "num_base_bdevs": 4, 00:14:09.762 "num_base_bdevs_discovered": 1, 00:14:09.762 "num_base_bdevs_operational": 4, 00:14:09.762 "base_bdevs_list": [ 00:14:09.762 { 00:14:09.762 "name": "BaseBdev1", 00:14:09.762 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:09.762 "is_configured": true, 00:14:09.762 "data_offset": 0, 00:14:09.762 "data_size": 65536 00:14:09.762 }, 00:14:09.762 { 00:14:09.762 "name": "BaseBdev2", 00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.762 "is_configured": false, 00:14:09.762 "data_offset": 0, 00:14:09.762 "data_size": 0 00:14:09.762 }, 00:14:09.762 { 00:14:09.762 "name": "BaseBdev3", 00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.762 "is_configured": false, 00:14:09.762 "data_offset": 0, 00:14:09.762 "data_size": 0 00:14:09.762 }, 00:14:09.762 { 00:14:09.762 "name": "BaseBdev4", 00:14:09.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.762 "is_configured": false, 00:14:09.762 "data_offset": 0, 00:14:09.762 "data_size": 0 00:14:09.762 } 00:14:09.762 ] 00:14:09.762 }' 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.762 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.329 [2024-11-27 14:13:40.587965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.329 [2024-11-27 14:13:40.588083] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.329 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.329 [2024-11-27 14:13:40.600594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.329 [2024-11-27 14:13:40.603495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.329 [2024-11-27 14:13:40.603696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.329 [2024-11-27 14:13:40.603887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.330 [2024-11-27 14:13:40.603969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.330 [2024-11-27 14:13:40.603993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.330 [2024-11-27 14:13:40.604013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.330 "name": "Existed_Raid", 00:14:10.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.330 "strip_size_kb": 64, 00:14:10.330 "state": "configuring", 00:14:10.330 "raid_level": "raid0", 00:14:10.330 "superblock": false, 00:14:10.330 "num_base_bdevs": 4, 00:14:10.330 
"num_base_bdevs_discovered": 1, 00:14:10.330 "num_base_bdevs_operational": 4, 00:14:10.330 "base_bdevs_list": [ 00:14:10.330 { 00:14:10.330 "name": "BaseBdev1", 00:14:10.330 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:10.330 "is_configured": true, 00:14:10.330 "data_offset": 0, 00:14:10.330 "data_size": 65536 00:14:10.330 }, 00:14:10.330 { 00:14:10.330 "name": "BaseBdev2", 00:14:10.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.330 "is_configured": false, 00:14:10.330 "data_offset": 0, 00:14:10.330 "data_size": 0 00:14:10.330 }, 00:14:10.330 { 00:14:10.330 "name": "BaseBdev3", 00:14:10.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.330 "is_configured": false, 00:14:10.330 "data_offset": 0, 00:14:10.330 "data_size": 0 00:14:10.330 }, 00:14:10.330 { 00:14:10.330 "name": "BaseBdev4", 00:14:10.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.330 "is_configured": false, 00:14:10.330 "data_offset": 0, 00:14:10.330 "data_size": 0 00:14:10.330 } 00:14:10.330 ] 00:14:10.330 }' 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.330 14:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.897 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.897 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.897 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.898 [2024-11-27 14:13:41.187186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.898 BaseBdev2 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:10.898 14:13:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.898 [ 00:14:10.898 { 00:14:10.898 "name": "BaseBdev2", 00:14:10.898 "aliases": [ 00:14:10.898 "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3" 00:14:10.898 ], 00:14:10.898 "product_name": "Malloc disk", 00:14:10.898 "block_size": 512, 00:14:10.898 "num_blocks": 65536, 00:14:10.898 "uuid": "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3", 00:14:10.898 "assigned_rate_limits": { 00:14:10.898 "rw_ios_per_sec": 0, 00:14:10.898 "rw_mbytes_per_sec": 0, 00:14:10.898 "r_mbytes_per_sec": 0, 00:14:10.898 "w_mbytes_per_sec": 0 00:14:10.898 }, 00:14:10.898 "claimed": true, 00:14:10.898 "claim_type": "exclusive_write", 00:14:10.898 "zoned": false, 00:14:10.898 "supported_io_types": { 
00:14:10.898 "read": true, 00:14:10.898 "write": true, 00:14:10.898 "unmap": true, 00:14:10.898 "flush": true, 00:14:10.898 "reset": true, 00:14:10.898 "nvme_admin": false, 00:14:10.898 "nvme_io": false, 00:14:10.898 "nvme_io_md": false, 00:14:10.898 "write_zeroes": true, 00:14:10.898 "zcopy": true, 00:14:10.898 "get_zone_info": false, 00:14:10.898 "zone_management": false, 00:14:10.898 "zone_append": false, 00:14:10.898 "compare": false, 00:14:10.898 "compare_and_write": false, 00:14:10.898 "abort": true, 00:14:10.898 "seek_hole": false, 00:14:10.898 "seek_data": false, 00:14:10.898 "copy": true, 00:14:10.898 "nvme_iov_md": false 00:14:10.898 }, 00:14:10.898 "memory_domains": [ 00:14:10.898 { 00:14:10.898 "dma_device_id": "system", 00:14:10.898 "dma_device_type": 1 00:14:10.898 }, 00:14:10.898 { 00:14:10.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.898 "dma_device_type": 2 00:14:10.898 } 00:14:10.898 ], 00:14:10.898 "driver_specific": {} 00:14:10.898 } 00:14:10.898 ] 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.898 "name": "Existed_Raid", 00:14:10.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.898 "strip_size_kb": 64, 00:14:10.898 "state": "configuring", 00:14:10.898 "raid_level": "raid0", 00:14:10.898 "superblock": false, 00:14:10.898 "num_base_bdevs": 4, 00:14:10.898 "num_base_bdevs_discovered": 2, 00:14:10.898 "num_base_bdevs_operational": 4, 00:14:10.898 "base_bdevs_list": [ 00:14:10.898 { 00:14:10.898 "name": "BaseBdev1", 00:14:10.898 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:10.898 "is_configured": true, 00:14:10.898 "data_offset": 0, 00:14:10.898 "data_size": 65536 00:14:10.898 }, 00:14:10.898 { 00:14:10.898 "name": "BaseBdev2", 00:14:10.898 "uuid": "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3", 00:14:10.898 
"is_configured": true, 00:14:10.898 "data_offset": 0, 00:14:10.898 "data_size": 65536 00:14:10.898 }, 00:14:10.898 { 00:14:10.898 "name": "BaseBdev3", 00:14:10.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.898 "is_configured": false, 00:14:10.898 "data_offset": 0, 00:14:10.898 "data_size": 0 00:14:10.898 }, 00:14:10.898 { 00:14:10.898 "name": "BaseBdev4", 00:14:10.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.898 "is_configured": false, 00:14:10.898 "data_offset": 0, 00:14:10.898 "data_size": 0 00:14:10.898 } 00:14:10.898 ] 00:14:10.898 }' 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.898 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.466 [2024-11-27 14:13:41.825480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.466 BaseBdev3 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.466 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.466 [ 00:14:11.466 { 00:14:11.466 "name": "BaseBdev3", 00:14:11.466 "aliases": [ 00:14:11.466 "838b0388-aa7c-451a-b7c9-3604e7cb8387" 00:14:11.466 ], 00:14:11.466 "product_name": "Malloc disk", 00:14:11.466 "block_size": 512, 00:14:11.466 "num_blocks": 65536, 00:14:11.466 "uuid": "838b0388-aa7c-451a-b7c9-3604e7cb8387", 00:14:11.466 "assigned_rate_limits": { 00:14:11.466 "rw_ios_per_sec": 0, 00:14:11.466 "rw_mbytes_per_sec": 0, 00:14:11.466 "r_mbytes_per_sec": 0, 00:14:11.466 "w_mbytes_per_sec": 0 00:14:11.466 }, 00:14:11.466 "claimed": true, 00:14:11.466 "claim_type": "exclusive_write", 00:14:11.466 "zoned": false, 00:14:11.466 "supported_io_types": { 00:14:11.466 "read": true, 00:14:11.466 "write": true, 00:14:11.466 "unmap": true, 00:14:11.466 "flush": true, 00:14:11.466 "reset": true, 00:14:11.466 "nvme_admin": false, 00:14:11.466 "nvme_io": false, 00:14:11.466 "nvme_io_md": false, 00:14:11.466 "write_zeroes": true, 00:14:11.466 "zcopy": true, 00:14:11.466 "get_zone_info": false, 00:14:11.466 "zone_management": false, 00:14:11.466 "zone_append": false, 00:14:11.466 "compare": false, 00:14:11.466 "compare_and_write": false, 
00:14:11.466 "abort": true, 00:14:11.467 "seek_hole": false, 00:14:11.467 "seek_data": false, 00:14:11.467 "copy": true, 00:14:11.467 "nvme_iov_md": false 00:14:11.467 }, 00:14:11.467 "memory_domains": [ 00:14:11.467 { 00:14:11.467 "dma_device_id": "system", 00:14:11.467 "dma_device_type": 1 00:14:11.467 }, 00:14:11.467 { 00:14:11.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.467 "dma_device_type": 2 00:14:11.467 } 00:14:11.467 ], 00:14:11.467 "driver_specific": {} 00:14:11.467 } 00:14:11.467 ] 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.467 "name": "Existed_Raid", 00:14:11.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.467 "strip_size_kb": 64, 00:14:11.467 "state": "configuring", 00:14:11.467 "raid_level": "raid0", 00:14:11.467 "superblock": false, 00:14:11.467 "num_base_bdevs": 4, 00:14:11.467 "num_base_bdevs_discovered": 3, 00:14:11.467 "num_base_bdevs_operational": 4, 00:14:11.467 "base_bdevs_list": [ 00:14:11.467 { 00:14:11.467 "name": "BaseBdev1", 00:14:11.467 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:11.467 "is_configured": true, 00:14:11.467 "data_offset": 0, 00:14:11.467 "data_size": 65536 00:14:11.467 }, 00:14:11.467 { 00:14:11.467 "name": "BaseBdev2", 00:14:11.467 "uuid": "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3", 00:14:11.467 "is_configured": true, 00:14:11.467 "data_offset": 0, 00:14:11.467 "data_size": 65536 00:14:11.467 }, 00:14:11.467 { 00:14:11.467 "name": "BaseBdev3", 00:14:11.467 "uuid": "838b0388-aa7c-451a-b7c9-3604e7cb8387", 00:14:11.467 "is_configured": true, 00:14:11.467 "data_offset": 0, 00:14:11.467 "data_size": 65536 00:14:11.467 }, 00:14:11.467 { 00:14:11.467 "name": "BaseBdev4", 00:14:11.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.467 "is_configured": false, 
00:14:11.467 "data_offset": 0, 00:14:11.467 "data_size": 0 00:14:11.467 } 00:14:11.467 ] 00:14:11.467 }' 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.467 14:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.034 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:12.034 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.034 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.034 [2024-11-27 14:13:42.440735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.034 [2024-11-27 14:13:42.440883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:12.034 [2024-11-27 14:13:42.440906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:12.034 [2024-11-27 14:13:42.441323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:12.034 [2024-11-27 14:13:42.441601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:12.034 [2024-11-27 14:13:42.441629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:12.034 [2024-11-27 14:13:42.442067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.034 BaseBdev4 00:14:12.034 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.034 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:12.034 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.035 [ 00:14:12.035 { 00:14:12.035 "name": "BaseBdev4", 00:14:12.035 "aliases": [ 00:14:12.035 "5101297e-9601-4096-823c-6910cde8d962" 00:14:12.035 ], 00:14:12.035 "product_name": "Malloc disk", 00:14:12.035 "block_size": 512, 00:14:12.035 "num_blocks": 65536, 00:14:12.035 "uuid": "5101297e-9601-4096-823c-6910cde8d962", 00:14:12.035 "assigned_rate_limits": { 00:14:12.035 "rw_ios_per_sec": 0, 00:14:12.035 "rw_mbytes_per_sec": 0, 00:14:12.035 "r_mbytes_per_sec": 0, 00:14:12.035 "w_mbytes_per_sec": 0 00:14:12.035 }, 00:14:12.035 "claimed": true, 00:14:12.035 "claim_type": "exclusive_write", 00:14:12.035 "zoned": false, 00:14:12.035 "supported_io_types": { 00:14:12.035 "read": true, 00:14:12.035 "write": true, 00:14:12.035 "unmap": true, 00:14:12.035 "flush": true, 00:14:12.035 "reset": true, 00:14:12.035 
"nvme_admin": false, 00:14:12.035 "nvme_io": false, 00:14:12.035 "nvme_io_md": false, 00:14:12.035 "write_zeroes": true, 00:14:12.035 "zcopy": true, 00:14:12.035 "get_zone_info": false, 00:14:12.035 "zone_management": false, 00:14:12.035 "zone_append": false, 00:14:12.035 "compare": false, 00:14:12.035 "compare_and_write": false, 00:14:12.035 "abort": true, 00:14:12.035 "seek_hole": false, 00:14:12.035 "seek_data": false, 00:14:12.035 "copy": true, 00:14:12.035 "nvme_iov_md": false 00:14:12.035 }, 00:14:12.035 "memory_domains": [ 00:14:12.035 { 00:14:12.035 "dma_device_id": "system", 00:14:12.035 "dma_device_type": 1 00:14:12.035 }, 00:14:12.035 { 00:14:12.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.035 "dma_device_type": 2 00:14:12.035 } 00:14:12.035 ], 00:14:12.035 "driver_specific": {} 00:14:12.035 } 00:14:12.035 ] 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.035 14:13:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.035 "name": "Existed_Raid", 00:14:12.035 "uuid": "62e8bd17-4933-4756-964d-42b2ddabda87", 00:14:12.035 "strip_size_kb": 64, 00:14:12.035 "state": "online", 00:14:12.035 "raid_level": "raid0", 00:14:12.035 "superblock": false, 00:14:12.035 "num_base_bdevs": 4, 00:14:12.035 "num_base_bdevs_discovered": 4, 00:14:12.035 "num_base_bdevs_operational": 4, 00:14:12.035 "base_bdevs_list": [ 00:14:12.035 { 00:14:12.035 "name": "BaseBdev1", 00:14:12.035 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:12.035 "is_configured": true, 00:14:12.035 "data_offset": 0, 00:14:12.035 "data_size": 65536 00:14:12.035 }, 00:14:12.035 { 00:14:12.035 "name": "BaseBdev2", 00:14:12.035 "uuid": "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3", 00:14:12.035 "is_configured": true, 00:14:12.035 "data_offset": 0, 00:14:12.035 "data_size": 65536 00:14:12.035 }, 00:14:12.035 { 00:14:12.035 "name": "BaseBdev3", 00:14:12.035 "uuid": 
"838b0388-aa7c-451a-b7c9-3604e7cb8387", 00:14:12.035 "is_configured": true, 00:14:12.035 "data_offset": 0, 00:14:12.035 "data_size": 65536 00:14:12.035 }, 00:14:12.035 { 00:14:12.035 "name": "BaseBdev4", 00:14:12.035 "uuid": "5101297e-9601-4096-823c-6910cde8d962", 00:14:12.035 "is_configured": true, 00:14:12.035 "data_offset": 0, 00:14:12.035 "data_size": 65536 00:14:12.035 } 00:14:12.035 ] 00:14:12.035 }' 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.035 14:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.601 [2024-11-27 14:13:43.085492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.601 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.860 14:13:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.860 "name": "Existed_Raid", 00:14:12.860 "aliases": [ 00:14:12.860 "62e8bd17-4933-4756-964d-42b2ddabda87" 00:14:12.860 ], 00:14:12.860 "product_name": "Raid Volume", 00:14:12.860 "block_size": 512, 00:14:12.860 "num_blocks": 262144, 00:14:12.860 "uuid": "62e8bd17-4933-4756-964d-42b2ddabda87", 00:14:12.860 "assigned_rate_limits": { 00:14:12.860 "rw_ios_per_sec": 0, 00:14:12.860 "rw_mbytes_per_sec": 0, 00:14:12.860 "r_mbytes_per_sec": 0, 00:14:12.860 "w_mbytes_per_sec": 0 00:14:12.860 }, 00:14:12.860 "claimed": false, 00:14:12.860 "zoned": false, 00:14:12.860 "supported_io_types": { 00:14:12.860 "read": true, 00:14:12.860 "write": true, 00:14:12.860 "unmap": true, 00:14:12.860 "flush": true, 00:14:12.860 "reset": true, 00:14:12.860 "nvme_admin": false, 00:14:12.860 "nvme_io": false, 00:14:12.860 "nvme_io_md": false, 00:14:12.860 "write_zeroes": true, 00:14:12.860 "zcopy": false, 00:14:12.860 "get_zone_info": false, 00:14:12.860 "zone_management": false, 00:14:12.860 "zone_append": false, 00:14:12.860 "compare": false, 00:14:12.860 "compare_and_write": false, 00:14:12.860 "abort": false, 00:14:12.860 "seek_hole": false, 00:14:12.860 "seek_data": false, 00:14:12.860 "copy": false, 00:14:12.860 "nvme_iov_md": false 00:14:12.860 }, 00:14:12.860 "memory_domains": [ 00:14:12.860 { 00:14:12.860 "dma_device_id": "system", 00:14:12.860 "dma_device_type": 1 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.860 "dma_device_type": 2 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "system", 00:14:12.860 "dma_device_type": 1 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.860 "dma_device_type": 2 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "system", 00:14:12.860 "dma_device_type": 1 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:12.860 "dma_device_type": 2 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "system", 00:14:12.860 "dma_device_type": 1 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.860 "dma_device_type": 2 00:14:12.860 } 00:14:12.860 ], 00:14:12.860 "driver_specific": { 00:14:12.860 "raid": { 00:14:12.860 "uuid": "62e8bd17-4933-4756-964d-42b2ddabda87", 00:14:12.860 "strip_size_kb": 64, 00:14:12.860 "state": "online", 00:14:12.860 "raid_level": "raid0", 00:14:12.860 "superblock": false, 00:14:12.860 "num_base_bdevs": 4, 00:14:12.860 "num_base_bdevs_discovered": 4, 00:14:12.860 "num_base_bdevs_operational": 4, 00:14:12.860 "base_bdevs_list": [ 00:14:12.860 { 00:14:12.860 "name": "BaseBdev1", 00:14:12.860 "uuid": "aa6fa7fd-7e9f-4294-89d1-868e6c4e9699", 00:14:12.860 "is_configured": true, 00:14:12.860 "data_offset": 0, 00:14:12.860 "data_size": 65536 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "name": "BaseBdev2", 00:14:12.860 "uuid": "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3", 00:14:12.860 "is_configured": true, 00:14:12.860 "data_offset": 0, 00:14:12.860 "data_size": 65536 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "name": "BaseBdev3", 00:14:12.860 "uuid": "838b0388-aa7c-451a-b7c9-3604e7cb8387", 00:14:12.860 "is_configured": true, 00:14:12.860 "data_offset": 0, 00:14:12.860 "data_size": 65536 00:14:12.860 }, 00:14:12.860 { 00:14:12.860 "name": "BaseBdev4", 00:14:12.860 "uuid": "5101297e-9601-4096-823c-6910cde8d962", 00:14:12.860 "is_configured": true, 00:14:12.860 "data_offset": 0, 00:14:12.860 "data_size": 65536 00:14:12.860 } 00:14:12.861 ] 00:14:12.861 } 00:14:12.861 } 00:14:12.861 }' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:12.861 BaseBdev2 00:14:12.861 BaseBdev3 
00:14:12.861 BaseBdev4' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.861 14:13:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.861 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.119 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.119 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.119 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.119 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:13.119 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.120 14:13:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.120 [2024-11-27 14:13:43.465262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.120 [2024-11-27 14:13:43.465349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.120 [2024-11-27 14:13:43.465458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.120 "name": "Existed_Raid", 00:14:13.120 "uuid": "62e8bd17-4933-4756-964d-42b2ddabda87", 00:14:13.120 "strip_size_kb": 64, 00:14:13.120 "state": "offline", 00:14:13.120 "raid_level": "raid0", 00:14:13.120 "superblock": false, 00:14:13.120 "num_base_bdevs": 4, 00:14:13.120 "num_base_bdevs_discovered": 3, 00:14:13.120 "num_base_bdevs_operational": 3, 00:14:13.120 "base_bdevs_list": [ 00:14:13.120 { 00:14:13.120 "name": null, 00:14:13.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.120 "is_configured": false, 00:14:13.120 "data_offset": 0, 00:14:13.120 "data_size": 65536 00:14:13.120 }, 00:14:13.120 { 00:14:13.120 "name": "BaseBdev2", 00:14:13.120 "uuid": "aa599ebf-ffb1-4ee2-8129-e8019ed1b1a3", 00:14:13.120 "is_configured": 
true, 00:14:13.120 "data_offset": 0, 00:14:13.120 "data_size": 65536 00:14:13.120 }, 00:14:13.120 { 00:14:13.120 "name": "BaseBdev3", 00:14:13.120 "uuid": "838b0388-aa7c-451a-b7c9-3604e7cb8387", 00:14:13.120 "is_configured": true, 00:14:13.120 "data_offset": 0, 00:14:13.120 "data_size": 65536 00:14:13.120 }, 00:14:13.120 { 00:14:13.120 "name": "BaseBdev4", 00:14:13.120 "uuid": "5101297e-9601-4096-823c-6910cde8d962", 00:14:13.120 "is_configured": true, 00:14:13.120 "data_offset": 0, 00:14:13.120 "data_size": 65536 00:14:13.120 } 00:14:13.120 ] 00:14:13.120 }' 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.120 14:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:13.687 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.687 [2024-11-27 14:13:44.153974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.946 [2024-11-27 14:13:44.302220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:13.946 14:13:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.946 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.946 [2024-11-27 14:13:44.450126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:13.946 [2024-11-27 14:13:44.450503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 BaseBdev2 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.205 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 [ 00:14:14.205 { 00:14:14.206 "name": "BaseBdev2", 00:14:14.206 "aliases": [ 00:14:14.206 "0443d85c-9d66-4011-98bc-6f665eeb804c" 00:14:14.206 ], 00:14:14.206 "product_name": "Malloc disk", 00:14:14.206 "block_size": 512, 00:14:14.206 "num_blocks": 65536, 00:14:14.206 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:14.206 "assigned_rate_limits": { 00:14:14.206 "rw_ios_per_sec": 0, 00:14:14.206 "rw_mbytes_per_sec": 0, 00:14:14.206 "r_mbytes_per_sec": 0, 00:14:14.206 "w_mbytes_per_sec": 0 00:14:14.206 }, 00:14:14.206 "claimed": false, 00:14:14.206 "zoned": false, 00:14:14.206 "supported_io_types": { 00:14:14.206 "read": true, 00:14:14.206 "write": true, 00:14:14.206 "unmap": true, 00:14:14.206 "flush": true, 00:14:14.206 "reset": true, 00:14:14.206 "nvme_admin": false, 00:14:14.206 "nvme_io": false, 00:14:14.206 "nvme_io_md": false, 00:14:14.206 "write_zeroes": true, 00:14:14.206 "zcopy": true, 00:14:14.206 "get_zone_info": false, 00:14:14.206 "zone_management": false, 00:14:14.206 "zone_append": false, 00:14:14.206 "compare": false, 00:14:14.206 "compare_and_write": false, 00:14:14.206 "abort": true, 00:14:14.206 "seek_hole": false, 00:14:14.206 
"seek_data": false, 00:14:14.206 "copy": true, 00:14:14.206 "nvme_iov_md": false 00:14:14.206 }, 00:14:14.206 "memory_domains": [ 00:14:14.206 { 00:14:14.206 "dma_device_id": "system", 00:14:14.206 "dma_device_type": 1 00:14:14.206 }, 00:14:14.206 { 00:14:14.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.206 "dma_device_type": 2 00:14:14.206 } 00:14:14.206 ], 00:14:14.206 "driver_specific": {} 00:14:14.206 } 00:14:14.206 ] 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.206 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.465 BaseBdev3 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.465 [ 00:14:14.465 { 00:14:14.465 "name": "BaseBdev3", 00:14:14.465 "aliases": [ 00:14:14.465 "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65" 00:14:14.465 ], 00:14:14.465 "product_name": "Malloc disk", 00:14:14.465 "block_size": 512, 00:14:14.465 "num_blocks": 65536, 00:14:14.465 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:14.465 "assigned_rate_limits": { 00:14:14.465 "rw_ios_per_sec": 0, 00:14:14.465 "rw_mbytes_per_sec": 0, 00:14:14.465 "r_mbytes_per_sec": 0, 00:14:14.465 "w_mbytes_per_sec": 0 00:14:14.465 }, 00:14:14.465 "claimed": false, 00:14:14.465 "zoned": false, 00:14:14.465 "supported_io_types": { 00:14:14.465 "read": true, 00:14:14.465 "write": true, 00:14:14.465 "unmap": true, 00:14:14.465 "flush": true, 00:14:14.465 "reset": true, 00:14:14.465 "nvme_admin": false, 00:14:14.465 "nvme_io": false, 00:14:14.465 "nvme_io_md": false, 00:14:14.465 "write_zeroes": true, 00:14:14.465 "zcopy": true, 00:14:14.465 "get_zone_info": false, 00:14:14.465 "zone_management": false, 00:14:14.465 "zone_append": false, 00:14:14.465 "compare": false, 00:14:14.465 "compare_and_write": false, 00:14:14.465 "abort": true, 00:14:14.465 "seek_hole": false, 00:14:14.465 "seek_data": false, 
00:14:14.465 "copy": true, 00:14:14.465 "nvme_iov_md": false 00:14:14.465 }, 00:14:14.465 "memory_domains": [ 00:14:14.465 { 00:14:14.465 "dma_device_id": "system", 00:14:14.465 "dma_device_type": 1 00:14:14.465 }, 00:14:14.465 { 00:14:14.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.465 "dma_device_type": 2 00:14:14.465 } 00:14:14.465 ], 00:14:14.465 "driver_specific": {} 00:14:14.465 } 00:14:14.465 ] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.465 BaseBdev4 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.465 
14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.465 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.465 [ 00:14:14.465 { 00:14:14.465 "name": "BaseBdev4", 00:14:14.465 "aliases": [ 00:14:14.465 "95c10d13-3d53-4b33-b690-5d97e3945281" 00:14:14.465 ], 00:14:14.465 "product_name": "Malloc disk", 00:14:14.465 "block_size": 512, 00:14:14.465 "num_blocks": 65536, 00:14:14.465 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:14.465 "assigned_rate_limits": { 00:14:14.465 "rw_ios_per_sec": 0, 00:14:14.465 "rw_mbytes_per_sec": 0, 00:14:14.465 "r_mbytes_per_sec": 0, 00:14:14.465 "w_mbytes_per_sec": 0 00:14:14.465 }, 00:14:14.465 "claimed": false, 00:14:14.465 "zoned": false, 00:14:14.465 "supported_io_types": { 00:14:14.465 "read": true, 00:14:14.465 "write": true, 00:14:14.465 "unmap": true, 00:14:14.465 "flush": true, 00:14:14.465 "reset": true, 00:14:14.465 "nvme_admin": false, 00:14:14.465 "nvme_io": false, 00:14:14.465 "nvme_io_md": false, 00:14:14.465 "write_zeroes": true, 00:14:14.465 "zcopy": true, 00:14:14.465 "get_zone_info": false, 00:14:14.465 "zone_management": false, 00:14:14.465 "zone_append": false, 00:14:14.465 "compare": false, 00:14:14.465 "compare_and_write": false, 00:14:14.465 "abort": true, 00:14:14.465 "seek_hole": false, 00:14:14.466 "seek_data": false, 00:14:14.466 
"copy": true, 00:14:14.466 "nvme_iov_md": false 00:14:14.466 }, 00:14:14.466 "memory_domains": [ 00:14:14.466 { 00:14:14.466 "dma_device_id": "system", 00:14:14.466 "dma_device_type": 1 00:14:14.466 }, 00:14:14.466 { 00:14:14.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.466 "dma_device_type": 2 00:14:14.466 } 00:14:14.466 ], 00:14:14.466 "driver_specific": {} 00:14:14.466 } 00:14:14.466 ] 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.466 [2024-11-27 14:13:44.846212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.466 [2024-11-27 14:13:44.846299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.466 [2024-11-27 14:13:44.846338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.466 [2024-11-27 14:13:44.848902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.466 [2024-11-27 14:13:44.848985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.466 14:13:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.466 "name": "Existed_Raid", 00:14:14.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.466 "strip_size_kb": 64, 00:14:14.466 "state": "configuring", 00:14:14.466 
"raid_level": "raid0", 00:14:14.466 "superblock": false, 00:14:14.466 "num_base_bdevs": 4, 00:14:14.466 "num_base_bdevs_discovered": 3, 00:14:14.466 "num_base_bdevs_operational": 4, 00:14:14.466 "base_bdevs_list": [ 00:14:14.466 { 00:14:14.466 "name": "BaseBdev1", 00:14:14.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.466 "is_configured": false, 00:14:14.466 "data_offset": 0, 00:14:14.466 "data_size": 0 00:14:14.466 }, 00:14:14.466 { 00:14:14.466 "name": "BaseBdev2", 00:14:14.466 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:14.466 "is_configured": true, 00:14:14.466 "data_offset": 0, 00:14:14.466 "data_size": 65536 00:14:14.466 }, 00:14:14.466 { 00:14:14.466 "name": "BaseBdev3", 00:14:14.466 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:14.466 "is_configured": true, 00:14:14.466 "data_offset": 0, 00:14:14.466 "data_size": 65536 00:14:14.466 }, 00:14:14.466 { 00:14:14.466 "name": "BaseBdev4", 00:14:14.466 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:14.466 "is_configured": true, 00:14:14.466 "data_offset": 0, 00:14:14.466 "data_size": 65536 00:14:14.466 } 00:14:14.466 ] 00:14:14.466 }' 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.466 14:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.033 [2024-11-27 14:13:45.390596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.033 "name": "Existed_Raid", 00:14:15.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.033 "strip_size_kb": 64, 00:14:15.033 "state": "configuring", 00:14:15.033 "raid_level": "raid0", 00:14:15.033 "superblock": false, 00:14:15.033 
"num_base_bdevs": 4, 00:14:15.033 "num_base_bdevs_discovered": 2, 00:14:15.033 "num_base_bdevs_operational": 4, 00:14:15.033 "base_bdevs_list": [ 00:14:15.033 { 00:14:15.033 "name": "BaseBdev1", 00:14:15.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.033 "is_configured": false, 00:14:15.033 "data_offset": 0, 00:14:15.033 "data_size": 0 00:14:15.033 }, 00:14:15.033 { 00:14:15.033 "name": null, 00:14:15.033 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:15.033 "is_configured": false, 00:14:15.033 "data_offset": 0, 00:14:15.033 "data_size": 65536 00:14:15.033 }, 00:14:15.033 { 00:14:15.033 "name": "BaseBdev3", 00:14:15.033 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:15.033 "is_configured": true, 00:14:15.033 "data_offset": 0, 00:14:15.033 "data_size": 65536 00:14:15.033 }, 00:14:15.033 { 00:14:15.033 "name": "BaseBdev4", 00:14:15.033 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:15.033 "is_configured": true, 00:14:15.033 "data_offset": 0, 00:14:15.033 "data_size": 65536 00:14:15.033 } 00:14:15.033 ] 00:14:15.033 }' 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.033 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:15.600 14:13:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.600 14:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.600 [2024-11-27 14:13:46.024176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.600 BaseBdev1 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.600 14:13:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.600 [ 00:14:15.600 { 00:14:15.600 "name": "BaseBdev1", 00:14:15.600 "aliases": [ 00:14:15.600 "308b0ab2-6561-48fa-994e-d836d1c0e8a4" 00:14:15.600 ], 00:14:15.600 "product_name": "Malloc disk", 00:14:15.600 "block_size": 512, 00:14:15.600 "num_blocks": 65536, 00:14:15.600 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:15.600 "assigned_rate_limits": { 00:14:15.600 "rw_ios_per_sec": 0, 00:14:15.600 "rw_mbytes_per_sec": 0, 00:14:15.600 "r_mbytes_per_sec": 0, 00:14:15.600 "w_mbytes_per_sec": 0 00:14:15.600 }, 00:14:15.600 "claimed": true, 00:14:15.601 "claim_type": "exclusive_write", 00:14:15.601 "zoned": false, 00:14:15.601 "supported_io_types": { 00:14:15.601 "read": true, 00:14:15.601 "write": true, 00:14:15.601 "unmap": true, 00:14:15.601 "flush": true, 00:14:15.601 "reset": true, 00:14:15.601 "nvme_admin": false, 00:14:15.601 "nvme_io": false, 00:14:15.601 "nvme_io_md": false, 00:14:15.601 "write_zeroes": true, 00:14:15.601 "zcopy": true, 00:14:15.601 "get_zone_info": false, 00:14:15.601 "zone_management": false, 00:14:15.601 "zone_append": false, 00:14:15.601 "compare": false, 00:14:15.601 "compare_and_write": false, 00:14:15.601 "abort": true, 00:14:15.601 "seek_hole": false, 00:14:15.601 "seek_data": false, 00:14:15.601 "copy": true, 00:14:15.601 "nvme_iov_md": false 00:14:15.601 }, 00:14:15.601 "memory_domains": [ 00:14:15.601 { 00:14:15.601 "dma_device_id": "system", 00:14:15.601 "dma_device_type": 1 00:14:15.601 }, 00:14:15.601 { 00:14:15.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.601 "dma_device_type": 2 00:14:15.601 } 00:14:15.601 ], 00:14:15.601 "driver_specific": {} 00:14:15.601 } 00:14:15.601 ] 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.601 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.860 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.860 "name": "Existed_Raid", 00:14:15.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.860 "strip_size_kb": 64, 00:14:15.860 "state": "configuring", 00:14:15.860 "raid_level": "raid0", 00:14:15.860 "superblock": false, 
00:14:15.860 "num_base_bdevs": 4, 00:14:15.860 "num_base_bdevs_discovered": 3, 00:14:15.860 "num_base_bdevs_operational": 4, 00:14:15.860 "base_bdevs_list": [ 00:14:15.860 { 00:14:15.860 "name": "BaseBdev1", 00:14:15.860 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:15.860 "is_configured": true, 00:14:15.860 "data_offset": 0, 00:14:15.860 "data_size": 65536 00:14:15.860 }, 00:14:15.860 { 00:14:15.860 "name": null, 00:14:15.860 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:15.860 "is_configured": false, 00:14:15.860 "data_offset": 0, 00:14:15.860 "data_size": 65536 00:14:15.860 }, 00:14:15.860 { 00:14:15.860 "name": "BaseBdev3", 00:14:15.860 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:15.860 "is_configured": true, 00:14:15.860 "data_offset": 0, 00:14:15.860 "data_size": 65536 00:14:15.860 }, 00:14:15.860 { 00:14:15.860 "name": "BaseBdev4", 00:14:15.860 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:15.860 "is_configured": true, 00:14:15.860 "data_offset": 0, 00:14:15.860 "data_size": 65536 00:14:15.860 } 00:14:15.860 ] 00:14:15.860 }' 00:14:15.860 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.860 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.118 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.118 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.118 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.118 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.118 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:16.411 14:13:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.411 [2024-11-27 14:13:46.660497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.411 14:13:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.411 "name": "Existed_Raid", 00:14:16.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.411 "strip_size_kb": 64, 00:14:16.411 "state": "configuring", 00:14:16.411 "raid_level": "raid0", 00:14:16.411 "superblock": false, 00:14:16.411 "num_base_bdevs": 4, 00:14:16.411 "num_base_bdevs_discovered": 2, 00:14:16.411 "num_base_bdevs_operational": 4, 00:14:16.411 "base_bdevs_list": [ 00:14:16.411 { 00:14:16.411 "name": "BaseBdev1", 00:14:16.411 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:16.411 "is_configured": true, 00:14:16.411 "data_offset": 0, 00:14:16.411 "data_size": 65536 00:14:16.411 }, 00:14:16.411 { 00:14:16.411 "name": null, 00:14:16.411 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:16.411 "is_configured": false, 00:14:16.411 "data_offset": 0, 00:14:16.411 "data_size": 65536 00:14:16.411 }, 00:14:16.411 { 00:14:16.411 "name": null, 00:14:16.411 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:16.411 "is_configured": false, 00:14:16.411 "data_offset": 0, 00:14:16.411 "data_size": 65536 00:14:16.411 }, 00:14:16.411 { 00:14:16.411 "name": "BaseBdev4", 00:14:16.411 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:16.411 "is_configured": true, 00:14:16.411 "data_offset": 0, 00:14:16.411 "data_size": 65536 00:14:16.411 } 00:14:16.411 ] 00:14:16.411 }' 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.411 14:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.994 [2024-11-27 14:13:47.252631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.994 "name": "Existed_Raid", 00:14:16.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.994 "strip_size_kb": 64, 00:14:16.994 "state": "configuring", 00:14:16.994 "raid_level": "raid0", 00:14:16.994 "superblock": false, 00:14:16.994 "num_base_bdevs": 4, 00:14:16.994 "num_base_bdevs_discovered": 3, 00:14:16.994 "num_base_bdevs_operational": 4, 00:14:16.994 "base_bdevs_list": [ 00:14:16.994 { 00:14:16.994 "name": "BaseBdev1", 00:14:16.994 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:16.994 "is_configured": true, 00:14:16.994 "data_offset": 0, 00:14:16.994 "data_size": 65536 00:14:16.994 }, 00:14:16.994 { 00:14:16.994 "name": null, 00:14:16.994 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:16.994 "is_configured": false, 00:14:16.994 "data_offset": 0, 00:14:16.994 "data_size": 65536 00:14:16.994 }, 00:14:16.994 { 00:14:16.994 "name": "BaseBdev3", 00:14:16.994 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 
00:14:16.994 "is_configured": true, 00:14:16.994 "data_offset": 0, 00:14:16.994 "data_size": 65536 00:14:16.994 }, 00:14:16.994 { 00:14:16.994 "name": "BaseBdev4", 00:14:16.994 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:16.994 "is_configured": true, 00:14:16.994 "data_offset": 0, 00:14:16.994 "data_size": 65536 00:14:16.994 } 00:14:16.994 ] 00:14:16.994 }' 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.994 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.562 [2024-11-27 14:13:47.852891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:17.562 14:13:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.562 14:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.562 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.562 "name": "Existed_Raid", 00:14:17.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.562 "strip_size_kb": 64, 00:14:17.562 "state": "configuring", 00:14:17.562 "raid_level": "raid0", 00:14:17.562 "superblock": false, 00:14:17.562 "num_base_bdevs": 4, 00:14:17.562 "num_base_bdevs_discovered": 2, 00:14:17.562 
"num_base_bdevs_operational": 4, 00:14:17.562 "base_bdevs_list": [ 00:14:17.562 { 00:14:17.562 "name": null, 00:14:17.562 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:17.562 "is_configured": false, 00:14:17.562 "data_offset": 0, 00:14:17.562 "data_size": 65536 00:14:17.562 }, 00:14:17.562 { 00:14:17.562 "name": null, 00:14:17.562 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:17.562 "is_configured": false, 00:14:17.562 "data_offset": 0, 00:14:17.562 "data_size": 65536 00:14:17.562 }, 00:14:17.562 { 00:14:17.562 "name": "BaseBdev3", 00:14:17.562 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:17.562 "is_configured": true, 00:14:17.562 "data_offset": 0, 00:14:17.562 "data_size": 65536 00:14:17.562 }, 00:14:17.562 { 00:14:17.562 "name": "BaseBdev4", 00:14:17.562 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:17.562 "is_configured": true, 00:14:17.562 "data_offset": 0, 00:14:17.562 "data_size": 65536 00:14:17.562 } 00:14:17.562 ] 00:14:17.562 }' 00:14:17.562 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.562 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.128 [2024-11-27 14:13:48.543830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.128 14:13:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.128 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.128 "name": "Existed_Raid", 00:14:18.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.128 "strip_size_kb": 64, 00:14:18.128 "state": "configuring", 00:14:18.128 "raid_level": "raid0", 00:14:18.128 "superblock": false, 00:14:18.128 "num_base_bdevs": 4, 00:14:18.128 "num_base_bdevs_discovered": 3, 00:14:18.128 "num_base_bdevs_operational": 4, 00:14:18.128 "base_bdevs_list": [ 00:14:18.128 { 00:14:18.128 "name": null, 00:14:18.128 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:18.128 "is_configured": false, 00:14:18.128 "data_offset": 0, 00:14:18.128 "data_size": 65536 00:14:18.128 }, 00:14:18.128 { 00:14:18.129 "name": "BaseBdev2", 00:14:18.129 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:18.129 "is_configured": true, 00:14:18.129 "data_offset": 0, 00:14:18.129 "data_size": 65536 00:14:18.129 }, 00:14:18.129 { 00:14:18.129 "name": "BaseBdev3", 00:14:18.129 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:18.129 "is_configured": true, 00:14:18.129 "data_offset": 0, 00:14:18.129 "data_size": 65536 00:14:18.129 }, 00:14:18.129 { 00:14:18.129 "name": "BaseBdev4", 00:14:18.129 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:18.129 "is_configured": true, 00:14:18.129 "data_offset": 0, 00:14:18.129 "data_size": 65536 00:14:18.129 } 00:14:18.129 ] 00:14:18.129 }' 00:14:18.129 14:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.129 14:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:18.694 
14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 308b0ab2-6561-48fa-994e-d836d1c0e8a4 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.694 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.950 [2024-11-27 14:13:49.215624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:18.950 [2024-11-27 14:13:49.215731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:18.950 [2024-11-27 14:13:49.215751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:18.950 [2024-11-27 14:13:49.216220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:14:18.951 [2024-11-27 14:13:49.216460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:18.951 [2024-11-27 14:13:49.216499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:18.951 [2024-11-27 14:13:49.216877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.951 NewBaseBdev 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:18.951 [ 00:14:18.951 { 00:14:18.951 "name": "NewBaseBdev", 00:14:18.951 "aliases": [ 00:14:18.951 "308b0ab2-6561-48fa-994e-d836d1c0e8a4" 00:14:18.951 ], 00:14:18.951 "product_name": "Malloc disk", 00:14:18.951 "block_size": 512, 00:14:18.951 "num_blocks": 65536, 00:14:18.951 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:18.951 "assigned_rate_limits": { 00:14:18.951 "rw_ios_per_sec": 0, 00:14:18.951 "rw_mbytes_per_sec": 0, 00:14:18.951 "r_mbytes_per_sec": 0, 00:14:18.951 "w_mbytes_per_sec": 0 00:14:18.951 }, 00:14:18.951 "claimed": true, 00:14:18.951 "claim_type": "exclusive_write", 00:14:18.951 "zoned": false, 00:14:18.951 "supported_io_types": { 00:14:18.951 "read": true, 00:14:18.951 "write": true, 00:14:18.951 "unmap": true, 00:14:18.951 "flush": true, 00:14:18.951 "reset": true, 00:14:18.951 "nvme_admin": false, 00:14:18.951 "nvme_io": false, 00:14:18.951 "nvme_io_md": false, 00:14:18.951 "write_zeroes": true, 00:14:18.951 "zcopy": true, 00:14:18.951 "get_zone_info": false, 00:14:18.951 "zone_management": false, 00:14:18.951 "zone_append": false, 00:14:18.951 "compare": false, 00:14:18.951 "compare_and_write": false, 00:14:18.951 "abort": true, 00:14:18.951 "seek_hole": false, 00:14:18.951 "seek_data": false, 00:14:18.951 "copy": true, 00:14:18.951 "nvme_iov_md": false 00:14:18.951 }, 00:14:18.951 "memory_domains": [ 00:14:18.951 { 00:14:18.951 "dma_device_id": "system", 00:14:18.951 "dma_device_type": 1 00:14:18.951 }, 00:14:18.951 { 00:14:18.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.951 "dma_device_type": 2 00:14:18.951 } 00:14:18.951 ], 00:14:18.951 "driver_specific": {} 00:14:18.951 } 00:14:18.951 ] 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.951 "name": "Existed_Raid", 00:14:18.951 "uuid": "2d02aa94-1b16-4e07-aaf8-1c3f9de64c94", 00:14:18.951 "strip_size_kb": 64, 00:14:18.951 "state": "online", 00:14:18.951 "raid_level": "raid0", 00:14:18.951 "superblock": false, 00:14:18.951 "num_base_bdevs": 4, 00:14:18.951 
"num_base_bdevs_discovered": 4, 00:14:18.951 "num_base_bdevs_operational": 4, 00:14:18.951 "base_bdevs_list": [ 00:14:18.951 { 00:14:18.951 "name": "NewBaseBdev", 00:14:18.951 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:18.951 "is_configured": true, 00:14:18.951 "data_offset": 0, 00:14:18.951 "data_size": 65536 00:14:18.951 }, 00:14:18.951 { 00:14:18.951 "name": "BaseBdev2", 00:14:18.951 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:18.951 "is_configured": true, 00:14:18.951 "data_offset": 0, 00:14:18.951 "data_size": 65536 00:14:18.951 }, 00:14:18.951 { 00:14:18.951 "name": "BaseBdev3", 00:14:18.951 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:18.951 "is_configured": true, 00:14:18.951 "data_offset": 0, 00:14:18.951 "data_size": 65536 00:14:18.951 }, 00:14:18.951 { 00:14:18.951 "name": "BaseBdev4", 00:14:18.951 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:18.951 "is_configured": true, 00:14:18.951 "data_offset": 0, 00:14:18.951 "data_size": 65536 00:14:18.951 } 00:14:18.951 ] 00:14:18.951 }' 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.951 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.516 [2024-11-27 14:13:49.760366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.516 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.516 "name": "Existed_Raid", 00:14:19.516 "aliases": [ 00:14:19.516 "2d02aa94-1b16-4e07-aaf8-1c3f9de64c94" 00:14:19.516 ], 00:14:19.516 "product_name": "Raid Volume", 00:14:19.516 "block_size": 512, 00:14:19.516 "num_blocks": 262144, 00:14:19.516 "uuid": "2d02aa94-1b16-4e07-aaf8-1c3f9de64c94", 00:14:19.516 "assigned_rate_limits": { 00:14:19.516 "rw_ios_per_sec": 0, 00:14:19.516 "rw_mbytes_per_sec": 0, 00:14:19.516 "r_mbytes_per_sec": 0, 00:14:19.516 "w_mbytes_per_sec": 0 00:14:19.516 }, 00:14:19.516 "claimed": false, 00:14:19.516 "zoned": false, 00:14:19.516 "supported_io_types": { 00:14:19.516 "read": true, 00:14:19.516 "write": true, 00:14:19.516 "unmap": true, 00:14:19.516 "flush": true, 00:14:19.516 "reset": true, 00:14:19.516 "nvme_admin": false, 00:14:19.516 "nvme_io": false, 00:14:19.516 "nvme_io_md": false, 00:14:19.516 "write_zeroes": true, 00:14:19.516 "zcopy": false, 00:14:19.516 "get_zone_info": false, 00:14:19.516 "zone_management": false, 00:14:19.516 "zone_append": false, 00:14:19.516 "compare": false, 00:14:19.516 "compare_and_write": false, 00:14:19.516 "abort": false, 00:14:19.516 "seek_hole": false, 00:14:19.516 "seek_data": false, 00:14:19.516 "copy": false, 00:14:19.516 "nvme_iov_md": false 00:14:19.516 }, 00:14:19.516 "memory_domains": [ 
00:14:19.516 { 00:14:19.516 "dma_device_id": "system", 00:14:19.516 "dma_device_type": 1 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.516 "dma_device_type": 2 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "system", 00:14:19.516 "dma_device_type": 1 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.516 "dma_device_type": 2 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "system", 00:14:19.516 "dma_device_type": 1 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.516 "dma_device_type": 2 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "system", 00:14:19.516 "dma_device_type": 1 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.516 "dma_device_type": 2 00:14:19.516 } 00:14:19.516 ], 00:14:19.516 "driver_specific": { 00:14:19.516 "raid": { 00:14:19.516 "uuid": "2d02aa94-1b16-4e07-aaf8-1c3f9de64c94", 00:14:19.516 "strip_size_kb": 64, 00:14:19.516 "state": "online", 00:14:19.516 "raid_level": "raid0", 00:14:19.516 "superblock": false, 00:14:19.516 "num_base_bdevs": 4, 00:14:19.516 "num_base_bdevs_discovered": 4, 00:14:19.516 "num_base_bdevs_operational": 4, 00:14:19.516 "base_bdevs_list": [ 00:14:19.516 { 00:14:19.516 "name": "NewBaseBdev", 00:14:19.516 "uuid": "308b0ab2-6561-48fa-994e-d836d1c0e8a4", 00:14:19.516 "is_configured": true, 00:14:19.516 "data_offset": 0, 00:14:19.516 "data_size": 65536 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "name": "BaseBdev2", 00:14:19.516 "uuid": "0443d85c-9d66-4011-98bc-6f665eeb804c", 00:14:19.516 "is_configured": true, 00:14:19.516 "data_offset": 0, 00:14:19.516 "data_size": 65536 00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "name": "BaseBdev3", 00:14:19.516 "uuid": "f4bb81c0-8a6a-4dc5-8d70-8913f1083d65", 00:14:19.516 "is_configured": true, 00:14:19.516 "data_offset": 0, 00:14:19.516 "data_size": 65536 
00:14:19.516 }, 00:14:19.516 { 00:14:19.516 "name": "BaseBdev4", 00:14:19.516 "uuid": "95c10d13-3d53-4b33-b690-5d97e3945281", 00:14:19.517 "is_configured": true, 00:14:19.517 "data_offset": 0, 00:14:19.517 "data_size": 65536 00:14:19.517 } 00:14:19.517 ] 00:14:19.517 } 00:14:19.517 } 00:14:19.517 }' 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:19.517 BaseBdev2 00:14:19.517 BaseBdev3 00:14:19.517 BaseBdev4' 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.517 14:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.517 
14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.517 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.775 [2024-11-27 14:13:50.167981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.775 [2024-11-27 14:13:50.168058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.775 [2024-11-27 14:13:50.168193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.775 [2024-11-27 14:13:50.168310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.775 [2024-11-27 14:13:50.168333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69637 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69637 ']' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69637 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69637 00:14:19.775 killing process with pid 69637 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69637' 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69637 00:14:19.775 [2024-11-27 14:13:50.206304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.775 14:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69637 00:14:20.343 [2024-11-27 14:13:50.590363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.277 ************************************ 00:14:21.277 END TEST raid_state_function_test 00:14:21.277 ************************************ 00:14:21.277 14:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:21.277 00:14:21.277 real 0m13.403s 00:14:21.277 user 0m22.021s 00:14:21.277 sys 0m1.917s 00:14:21.277 14:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.277 14:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.535 14:13:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:14:21.535 14:13:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:21.535 14:13:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.535 14:13:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.535 ************************************ 00:14:21.535 START TEST raid_state_function_test_sb 00:14:21.535 ************************************ 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:21.535 
14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:21.535 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70331 00:14:21.536 Process raid pid: 70331 00:14:21.536 14:13:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70331' 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70331 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70331 ']' 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.536 14:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.536 [2024-11-27 14:13:51.918108] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:14:21.536 [2024-11-27 14:13:51.918302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.794 [2024-11-27 14:13:52.101122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.794 [2024-11-27 14:13:52.250295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.052 [2024-11-27 14:13:52.481548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.052 [2024-11-27 14:13:52.481626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.627 [2024-11-27 14:13:52.884101] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.627 [2024-11-27 14:13:52.884277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.627 [2024-11-27 14:13:52.884303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.627 [2024-11-27 14:13:52.884332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.627 [2024-11-27 14:13:52.884350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:14:22.627 [2024-11-27 14:13:52.884391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.627 [2024-11-27 14:13:52.884425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:22.627 [2024-11-27 14:13:52.884453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.627 14:13:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.627 "name": "Existed_Raid", 00:14:22.627 "uuid": "ce389d1c-46ab-451f-a80e-1d8106aa125f", 00:14:22.627 "strip_size_kb": 64, 00:14:22.627 "state": "configuring", 00:14:22.627 "raid_level": "raid0", 00:14:22.627 "superblock": true, 00:14:22.627 "num_base_bdevs": 4, 00:14:22.627 "num_base_bdevs_discovered": 0, 00:14:22.627 "num_base_bdevs_operational": 4, 00:14:22.627 "base_bdevs_list": [ 00:14:22.627 { 00:14:22.627 "name": "BaseBdev1", 00:14:22.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.627 "is_configured": false, 00:14:22.627 "data_offset": 0, 00:14:22.627 "data_size": 0 00:14:22.627 }, 00:14:22.627 { 00:14:22.627 "name": "BaseBdev2", 00:14:22.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.627 "is_configured": false, 00:14:22.627 "data_offset": 0, 00:14:22.627 "data_size": 0 00:14:22.627 }, 00:14:22.627 { 00:14:22.627 "name": "BaseBdev3", 00:14:22.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.627 "is_configured": false, 00:14:22.627 "data_offset": 0, 00:14:22.627 "data_size": 0 00:14:22.627 }, 00:14:22.627 { 00:14:22.627 "name": "BaseBdev4", 00:14:22.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.627 "is_configured": false, 00:14:22.627 "data_offset": 0, 00:14:22.627 "data_size": 0 00:14:22.627 } 00:14:22.627 ] 00:14:22.627 }' 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.627 14:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.193 14:13:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.193 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.193 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.193 [2024-11-27 14:13:53.404004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.193 [2024-11-27 14:13:53.404314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.194 [2024-11-27 14:13:53.411979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.194 [2024-11-27 14:13:53.412034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.194 [2024-11-27 14:13:53.412050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.194 [2024-11-27 14:13:53.412067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.194 [2024-11-27 14:13:53.412077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.194 [2024-11-27 14:13:53.412092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.194 [2024-11-27 14:13:53.412101] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:23.194 [2024-11-27 14:13:53.412115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.194 [2024-11-27 14:13:53.460924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.194 BaseBdev1 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.194 [ 00:14:23.194 { 00:14:23.194 "name": "BaseBdev1", 00:14:23.194 "aliases": [ 00:14:23.194 "0465c428-8661-4016-87f0-9059769b75b7" 00:14:23.194 ], 00:14:23.194 "product_name": "Malloc disk", 00:14:23.194 "block_size": 512, 00:14:23.194 "num_blocks": 65536, 00:14:23.194 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:23.194 "assigned_rate_limits": { 00:14:23.194 "rw_ios_per_sec": 0, 00:14:23.194 "rw_mbytes_per_sec": 0, 00:14:23.194 "r_mbytes_per_sec": 0, 00:14:23.194 "w_mbytes_per_sec": 0 00:14:23.194 }, 00:14:23.194 "claimed": true, 00:14:23.194 "claim_type": "exclusive_write", 00:14:23.194 "zoned": false, 00:14:23.194 "supported_io_types": { 00:14:23.194 "read": true, 00:14:23.194 "write": true, 00:14:23.194 "unmap": true, 00:14:23.194 "flush": true, 00:14:23.194 "reset": true, 00:14:23.194 "nvme_admin": false, 00:14:23.194 "nvme_io": false, 00:14:23.194 "nvme_io_md": false, 00:14:23.194 "write_zeroes": true, 00:14:23.194 "zcopy": true, 00:14:23.194 "get_zone_info": false, 00:14:23.194 "zone_management": false, 00:14:23.194 "zone_append": false, 00:14:23.194 "compare": false, 00:14:23.194 "compare_and_write": false, 00:14:23.194 "abort": true, 00:14:23.194 "seek_hole": false, 00:14:23.194 "seek_data": false, 00:14:23.194 "copy": true, 00:14:23.194 "nvme_iov_md": false 00:14:23.194 }, 00:14:23.194 "memory_domains": [ 00:14:23.194 { 00:14:23.194 "dma_device_id": "system", 00:14:23.194 "dma_device_type": 1 00:14:23.194 }, 00:14:23.194 { 00:14:23.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.194 "dma_device_type": 2 00:14:23.194 } 
00:14:23.194 ], 00:14:23.194 "driver_specific": {} 00:14:23.194 } 00:14:23.194 ] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.194 14:13:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.194 "name": "Existed_Raid", 00:14:23.194 "uuid": "c55288c4-a0c9-4f31-b559-9b9d4eef7860", 00:14:23.194 "strip_size_kb": 64, 00:14:23.194 "state": "configuring", 00:14:23.194 "raid_level": "raid0", 00:14:23.194 "superblock": true, 00:14:23.194 "num_base_bdevs": 4, 00:14:23.194 "num_base_bdevs_discovered": 1, 00:14:23.194 "num_base_bdevs_operational": 4, 00:14:23.194 "base_bdevs_list": [ 00:14:23.194 { 00:14:23.194 "name": "BaseBdev1", 00:14:23.194 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:23.194 "is_configured": true, 00:14:23.194 "data_offset": 2048, 00:14:23.194 "data_size": 63488 00:14:23.194 }, 00:14:23.194 { 00:14:23.194 "name": "BaseBdev2", 00:14:23.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.194 "is_configured": false, 00:14:23.194 "data_offset": 0, 00:14:23.194 "data_size": 0 00:14:23.194 }, 00:14:23.194 { 00:14:23.194 "name": "BaseBdev3", 00:14:23.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.194 "is_configured": false, 00:14:23.194 "data_offset": 0, 00:14:23.194 "data_size": 0 00:14:23.194 }, 00:14:23.194 { 00:14:23.194 "name": "BaseBdev4", 00:14:23.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.194 "is_configured": false, 00:14:23.194 "data_offset": 0, 00:14:23.194 "data_size": 0 00:14:23.194 } 00:14:23.194 ] 00:14:23.194 }' 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.194 14:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.760 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.760 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.760 14:13:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.760 [2024-11-27 14:13:54.021137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.760 [2024-11-27 14:13:54.021222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:23.760 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.760 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:23.760 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.761 [2024-11-27 14:13:54.029191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.761 [2024-11-27 14:13:54.031762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.761 [2024-11-27 14:13:54.031832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.761 [2024-11-27 14:13:54.031850] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.761 [2024-11-27 14:13:54.031869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.761 [2024-11-27 14:13:54.031880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:23.761 [2024-11-27 14:13:54.031905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:23.761 "name": "Existed_Raid", 00:14:23.761 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:23.761 "strip_size_kb": 64, 00:14:23.761 "state": "configuring", 00:14:23.761 "raid_level": "raid0", 00:14:23.761 "superblock": true, 00:14:23.761 "num_base_bdevs": 4, 00:14:23.761 "num_base_bdevs_discovered": 1, 00:14:23.761 "num_base_bdevs_operational": 4, 00:14:23.761 "base_bdevs_list": [ 00:14:23.761 { 00:14:23.761 "name": "BaseBdev1", 00:14:23.761 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:23.761 "is_configured": true, 00:14:23.761 "data_offset": 2048, 00:14:23.761 "data_size": 63488 00:14:23.761 }, 00:14:23.761 { 00:14:23.761 "name": "BaseBdev2", 00:14:23.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.761 "is_configured": false, 00:14:23.761 "data_offset": 0, 00:14:23.761 "data_size": 0 00:14:23.761 }, 00:14:23.761 { 00:14:23.761 "name": "BaseBdev3", 00:14:23.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.761 "is_configured": false, 00:14:23.761 "data_offset": 0, 00:14:23.761 "data_size": 0 00:14:23.761 }, 00:14:23.761 { 00:14:23.761 "name": "BaseBdev4", 00:14:23.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.761 "is_configured": false, 00:14:23.761 "data_offset": 0, 00:14:23.761 "data_size": 0 00:14:23.761 } 00:14:23.761 ] 00:14:23.761 }' 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.761 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.084 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.084 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.084 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.357 [2024-11-27 14:13:54.591766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:24.357 BaseBdev2 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.357 [ 00:14:24.357 { 00:14:24.357 "name": "BaseBdev2", 00:14:24.357 "aliases": [ 00:14:24.357 "cfc80703-c970-42ac-8b7f-d246da42528d" 00:14:24.357 ], 00:14:24.357 "product_name": "Malloc disk", 00:14:24.357 "block_size": 512, 00:14:24.357 "num_blocks": 65536, 00:14:24.357 "uuid": "cfc80703-c970-42ac-8b7f-d246da42528d", 
00:14:24.357 "assigned_rate_limits": { 00:14:24.357 "rw_ios_per_sec": 0, 00:14:24.357 "rw_mbytes_per_sec": 0, 00:14:24.357 "r_mbytes_per_sec": 0, 00:14:24.357 "w_mbytes_per_sec": 0 00:14:24.357 }, 00:14:24.357 "claimed": true, 00:14:24.357 "claim_type": "exclusive_write", 00:14:24.357 "zoned": false, 00:14:24.357 "supported_io_types": { 00:14:24.357 "read": true, 00:14:24.357 "write": true, 00:14:24.357 "unmap": true, 00:14:24.357 "flush": true, 00:14:24.357 "reset": true, 00:14:24.357 "nvme_admin": false, 00:14:24.357 "nvme_io": false, 00:14:24.357 "nvme_io_md": false, 00:14:24.357 "write_zeroes": true, 00:14:24.357 "zcopy": true, 00:14:24.357 "get_zone_info": false, 00:14:24.357 "zone_management": false, 00:14:24.357 "zone_append": false, 00:14:24.357 "compare": false, 00:14:24.357 "compare_and_write": false, 00:14:24.357 "abort": true, 00:14:24.357 "seek_hole": false, 00:14:24.357 "seek_data": false, 00:14:24.357 "copy": true, 00:14:24.357 "nvme_iov_md": false 00:14:24.357 }, 00:14:24.357 "memory_domains": [ 00:14:24.357 { 00:14:24.357 "dma_device_id": "system", 00:14:24.357 "dma_device_type": 1 00:14:24.357 }, 00:14:24.357 { 00:14:24.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.357 "dma_device_type": 2 00:14:24.357 } 00:14:24.357 ], 00:14:24.357 "driver_specific": {} 00:14:24.357 } 00:14:24.357 ] 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.357 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.357 "name": "Existed_Raid", 00:14:24.357 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:24.357 "strip_size_kb": 64, 00:14:24.357 "state": "configuring", 00:14:24.357 "raid_level": "raid0", 00:14:24.357 "superblock": true, 00:14:24.357 "num_base_bdevs": 4, 00:14:24.357 "num_base_bdevs_discovered": 2, 00:14:24.357 
"num_base_bdevs_operational": 4, 00:14:24.357 "base_bdevs_list": [ 00:14:24.357 { 00:14:24.357 "name": "BaseBdev1", 00:14:24.357 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:24.357 "is_configured": true, 00:14:24.357 "data_offset": 2048, 00:14:24.357 "data_size": 63488 00:14:24.357 }, 00:14:24.357 { 00:14:24.357 "name": "BaseBdev2", 00:14:24.357 "uuid": "cfc80703-c970-42ac-8b7f-d246da42528d", 00:14:24.357 "is_configured": true, 00:14:24.357 "data_offset": 2048, 00:14:24.357 "data_size": 63488 00:14:24.357 }, 00:14:24.357 { 00:14:24.357 "name": "BaseBdev3", 00:14:24.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.357 "is_configured": false, 00:14:24.357 "data_offset": 0, 00:14:24.358 "data_size": 0 00:14:24.358 }, 00:14:24.358 { 00:14:24.358 "name": "BaseBdev4", 00:14:24.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.358 "is_configured": false, 00:14:24.358 "data_offset": 0, 00:14:24.358 "data_size": 0 00:14:24.358 } 00:14:24.358 ] 00:14:24.358 }' 00:14:24.358 14:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.358 14:13:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.924 [2024-11-27 14:13:55.213140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.924 BaseBdev3 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.924 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.924 [ 00:14:24.924 { 00:14:24.924 "name": "BaseBdev3", 00:14:24.924 "aliases": [ 00:14:24.924 "85288f3b-ac0b-4f1c-8ceb-9864a39a6aca" 00:14:24.924 ], 00:14:24.924 "product_name": "Malloc disk", 00:14:24.924 "block_size": 512, 00:14:24.924 "num_blocks": 65536, 00:14:24.924 "uuid": "85288f3b-ac0b-4f1c-8ceb-9864a39a6aca", 00:14:24.924 "assigned_rate_limits": { 00:14:24.924 "rw_ios_per_sec": 0, 00:14:24.924 "rw_mbytes_per_sec": 0, 00:14:24.924 "r_mbytes_per_sec": 0, 00:14:24.924 "w_mbytes_per_sec": 0 00:14:24.924 }, 00:14:24.925 "claimed": true, 00:14:24.925 "claim_type": "exclusive_write", 00:14:24.925 "zoned": false, 00:14:24.925 "supported_io_types": { 
00:14:24.925 "read": true, 00:14:24.925 "write": true, 00:14:24.925 "unmap": true, 00:14:24.925 "flush": true, 00:14:24.925 "reset": true, 00:14:24.925 "nvme_admin": false, 00:14:24.925 "nvme_io": false, 00:14:24.925 "nvme_io_md": false, 00:14:24.925 "write_zeroes": true, 00:14:24.925 "zcopy": true, 00:14:24.925 "get_zone_info": false, 00:14:24.925 "zone_management": false, 00:14:24.925 "zone_append": false, 00:14:24.925 "compare": false, 00:14:24.925 "compare_and_write": false, 00:14:24.925 "abort": true, 00:14:24.925 "seek_hole": false, 00:14:24.925 "seek_data": false, 00:14:24.925 "copy": true, 00:14:24.925 "nvme_iov_md": false 00:14:24.925 }, 00:14:24.925 "memory_domains": [ 00:14:24.925 { 00:14:24.925 "dma_device_id": "system", 00:14:24.925 "dma_device_type": 1 00:14:24.925 }, 00:14:24.925 { 00:14:24.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.925 "dma_device_type": 2 00:14:24.925 } 00:14:24.925 ], 00:14:24.925 "driver_specific": {} 00:14:24.925 } 00:14:24.925 ] 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.925 "name": "Existed_Raid", 00:14:24.925 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:24.925 "strip_size_kb": 64, 00:14:24.925 "state": "configuring", 00:14:24.925 "raid_level": "raid0", 00:14:24.925 "superblock": true, 00:14:24.925 "num_base_bdevs": 4, 00:14:24.925 "num_base_bdevs_discovered": 3, 00:14:24.925 "num_base_bdevs_operational": 4, 00:14:24.925 "base_bdevs_list": [ 00:14:24.925 { 00:14:24.925 "name": "BaseBdev1", 00:14:24.925 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:24.925 "is_configured": true, 00:14:24.925 "data_offset": 2048, 00:14:24.925 "data_size": 63488 00:14:24.925 }, 00:14:24.925 { 00:14:24.925 "name": "BaseBdev2", 00:14:24.925 
"uuid": "cfc80703-c970-42ac-8b7f-d246da42528d", 00:14:24.925 "is_configured": true, 00:14:24.925 "data_offset": 2048, 00:14:24.925 "data_size": 63488 00:14:24.925 }, 00:14:24.925 { 00:14:24.925 "name": "BaseBdev3", 00:14:24.925 "uuid": "85288f3b-ac0b-4f1c-8ceb-9864a39a6aca", 00:14:24.925 "is_configured": true, 00:14:24.925 "data_offset": 2048, 00:14:24.925 "data_size": 63488 00:14:24.925 }, 00:14:24.925 { 00:14:24.925 "name": "BaseBdev4", 00:14:24.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.925 "is_configured": false, 00:14:24.925 "data_offset": 0, 00:14:24.925 "data_size": 0 00:14:24.925 } 00:14:24.925 ] 00:14:24.925 }' 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.925 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.492 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:25.492 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.492 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.492 [2024-11-27 14:13:55.827669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.492 [2024-11-27 14:13:55.828117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:25.492 [2024-11-27 14:13:55.828139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:25.492 BaseBdev4 00:14:25.492 [2024-11-27 14:13:55.828506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:25.492 [2024-11-27 14:13:55.828711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:25.492 [2024-11-27 14:13:55.828732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:25.493 [2024-11-27 14:13:55.828944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.493 [ 00:14:25.493 { 00:14:25.493 "name": "BaseBdev4", 00:14:25.493 "aliases": [ 00:14:25.493 "3b8fcfdd-085a-4ccf-aaa6-1f2dc9beceeb" 00:14:25.493 ], 00:14:25.493 "product_name": "Malloc disk", 00:14:25.493 "block_size": 512, 00:14:25.493 
"num_blocks": 65536, 00:14:25.493 "uuid": "3b8fcfdd-085a-4ccf-aaa6-1f2dc9beceeb", 00:14:25.493 "assigned_rate_limits": { 00:14:25.493 "rw_ios_per_sec": 0, 00:14:25.493 "rw_mbytes_per_sec": 0, 00:14:25.493 "r_mbytes_per_sec": 0, 00:14:25.493 "w_mbytes_per_sec": 0 00:14:25.493 }, 00:14:25.493 "claimed": true, 00:14:25.493 "claim_type": "exclusive_write", 00:14:25.493 "zoned": false, 00:14:25.493 "supported_io_types": { 00:14:25.493 "read": true, 00:14:25.493 "write": true, 00:14:25.493 "unmap": true, 00:14:25.493 "flush": true, 00:14:25.493 "reset": true, 00:14:25.493 "nvme_admin": false, 00:14:25.493 "nvme_io": false, 00:14:25.493 "nvme_io_md": false, 00:14:25.493 "write_zeroes": true, 00:14:25.493 "zcopy": true, 00:14:25.493 "get_zone_info": false, 00:14:25.493 "zone_management": false, 00:14:25.493 "zone_append": false, 00:14:25.493 "compare": false, 00:14:25.493 "compare_and_write": false, 00:14:25.493 "abort": true, 00:14:25.493 "seek_hole": false, 00:14:25.493 "seek_data": false, 00:14:25.493 "copy": true, 00:14:25.493 "nvme_iov_md": false 00:14:25.493 }, 00:14:25.493 "memory_domains": [ 00:14:25.493 { 00:14:25.493 "dma_device_id": "system", 00:14:25.493 "dma_device_type": 1 00:14:25.493 }, 00:14:25.493 { 00:14:25.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.493 "dma_device_type": 2 00:14:25.493 } 00:14:25.493 ], 00:14:25.493 "driver_specific": {} 00:14:25.493 } 00:14:25.493 ] 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.493 "name": "Existed_Raid", 00:14:25.493 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:25.493 "strip_size_kb": 64, 00:14:25.493 "state": "online", 00:14:25.493 "raid_level": "raid0", 00:14:25.493 "superblock": true, 00:14:25.493 "num_base_bdevs": 4, 
00:14:25.493 "num_base_bdevs_discovered": 4, 00:14:25.493 "num_base_bdevs_operational": 4, 00:14:25.493 "base_bdevs_list": [ 00:14:25.493 { 00:14:25.493 "name": "BaseBdev1", 00:14:25.493 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:25.493 "is_configured": true, 00:14:25.493 "data_offset": 2048, 00:14:25.493 "data_size": 63488 00:14:25.493 }, 00:14:25.493 { 00:14:25.493 "name": "BaseBdev2", 00:14:25.493 "uuid": "cfc80703-c970-42ac-8b7f-d246da42528d", 00:14:25.493 "is_configured": true, 00:14:25.493 "data_offset": 2048, 00:14:25.493 "data_size": 63488 00:14:25.493 }, 00:14:25.493 { 00:14:25.493 "name": "BaseBdev3", 00:14:25.493 "uuid": "85288f3b-ac0b-4f1c-8ceb-9864a39a6aca", 00:14:25.493 "is_configured": true, 00:14:25.493 "data_offset": 2048, 00:14:25.493 "data_size": 63488 00:14:25.493 }, 00:14:25.493 { 00:14:25.493 "name": "BaseBdev4", 00:14:25.493 "uuid": "3b8fcfdd-085a-4ccf-aaa6-1f2dc9beceeb", 00:14:25.493 "is_configured": true, 00:14:25.493 "data_offset": 2048, 00:14:25.493 "data_size": 63488 00:14:25.493 } 00:14:25.493 ] 00:14:25.493 }' 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.493 14:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.059 
14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.059 [2024-11-27 14:13:56.396406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.059 "name": "Existed_Raid", 00:14:26.059 "aliases": [ 00:14:26.059 "ce77a08b-a897-4d9a-ab28-b070b46c9daa" 00:14:26.059 ], 00:14:26.059 "product_name": "Raid Volume", 00:14:26.059 "block_size": 512, 00:14:26.059 "num_blocks": 253952, 00:14:26.059 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:26.059 "assigned_rate_limits": { 00:14:26.059 "rw_ios_per_sec": 0, 00:14:26.059 "rw_mbytes_per_sec": 0, 00:14:26.059 "r_mbytes_per_sec": 0, 00:14:26.059 "w_mbytes_per_sec": 0 00:14:26.059 }, 00:14:26.059 "claimed": false, 00:14:26.059 "zoned": false, 00:14:26.059 "supported_io_types": { 00:14:26.059 "read": true, 00:14:26.059 "write": true, 00:14:26.059 "unmap": true, 00:14:26.059 "flush": true, 00:14:26.059 "reset": true, 00:14:26.059 "nvme_admin": false, 00:14:26.059 "nvme_io": false, 00:14:26.059 "nvme_io_md": false, 00:14:26.059 "write_zeroes": true, 00:14:26.059 "zcopy": false, 00:14:26.059 "get_zone_info": false, 00:14:26.059 "zone_management": false, 00:14:26.059 "zone_append": false, 00:14:26.059 "compare": false, 00:14:26.059 "compare_and_write": false, 00:14:26.059 "abort": false, 00:14:26.059 "seek_hole": false, 00:14:26.059 "seek_data": false, 00:14:26.059 "copy": false, 00:14:26.059 
"nvme_iov_md": false 00:14:26.059 }, 00:14:26.059 "memory_domains": [ 00:14:26.059 { 00:14:26.059 "dma_device_id": "system", 00:14:26.059 "dma_device_type": 1 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.059 "dma_device_type": 2 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "system", 00:14:26.059 "dma_device_type": 1 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.059 "dma_device_type": 2 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "system", 00:14:26.059 "dma_device_type": 1 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.059 "dma_device_type": 2 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "system", 00:14:26.059 "dma_device_type": 1 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.059 "dma_device_type": 2 00:14:26.059 } 00:14:26.059 ], 00:14:26.059 "driver_specific": { 00:14:26.059 "raid": { 00:14:26.059 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:26.059 "strip_size_kb": 64, 00:14:26.059 "state": "online", 00:14:26.059 "raid_level": "raid0", 00:14:26.059 "superblock": true, 00:14:26.059 "num_base_bdevs": 4, 00:14:26.059 "num_base_bdevs_discovered": 4, 00:14:26.059 "num_base_bdevs_operational": 4, 00:14:26.059 "base_bdevs_list": [ 00:14:26.059 { 00:14:26.059 "name": "BaseBdev1", 00:14:26.059 "uuid": "0465c428-8661-4016-87f0-9059769b75b7", 00:14:26.059 "is_configured": true, 00:14:26.059 "data_offset": 2048, 00:14:26.059 "data_size": 63488 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "name": "BaseBdev2", 00:14:26.059 "uuid": "cfc80703-c970-42ac-8b7f-d246da42528d", 00:14:26.059 "is_configured": true, 00:14:26.059 "data_offset": 2048, 00:14:26.059 "data_size": 63488 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "name": "BaseBdev3", 00:14:26.059 "uuid": "85288f3b-ac0b-4f1c-8ceb-9864a39a6aca", 00:14:26.059 "is_configured": true, 
00:14:26.059 "data_offset": 2048, 00:14:26.059 "data_size": 63488 00:14:26.059 }, 00:14:26.059 { 00:14:26.059 "name": "BaseBdev4", 00:14:26.059 "uuid": "3b8fcfdd-085a-4ccf-aaa6-1f2dc9beceeb", 00:14:26.059 "is_configured": true, 00:14:26.059 "data_offset": 2048, 00:14:26.059 "data_size": 63488 00:14:26.059 } 00:14:26.059 ] 00:14:26.059 } 00:14:26.059 } 00:14:26.059 }' 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:26.059 BaseBdev2 00:14:26.059 BaseBdev3 00:14:26.059 BaseBdev4' 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.059 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.318 14:13:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.318 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.318 [2024-11-27 14:13:56.752153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.318 [2024-11-27 14:13:56.752230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.318 [2024-11-27 14:13:56.752309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.576 "name": "Existed_Raid", 00:14:26.576 "uuid": "ce77a08b-a897-4d9a-ab28-b070b46c9daa", 00:14:26.576 "strip_size_kb": 64, 00:14:26.576 "state": "offline", 00:14:26.576 "raid_level": "raid0", 00:14:26.576 "superblock": true, 00:14:26.576 "num_base_bdevs": 4, 00:14:26.576 "num_base_bdevs_discovered": 3, 00:14:26.576 "num_base_bdevs_operational": 3, 00:14:26.576 "base_bdevs_list": [ 00:14:26.576 { 00:14:26.576 "name": null, 00:14:26.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.576 "is_configured": false, 00:14:26.576 "data_offset": 0, 00:14:26.576 "data_size": 63488 00:14:26.576 }, 00:14:26.576 { 00:14:26.576 "name": "BaseBdev2", 00:14:26.576 "uuid": "cfc80703-c970-42ac-8b7f-d246da42528d", 00:14:26.576 "is_configured": true, 00:14:26.576 "data_offset": 2048, 00:14:26.576 "data_size": 63488 00:14:26.576 }, 00:14:26.576 { 00:14:26.576 "name": "BaseBdev3", 00:14:26.576 "uuid": "85288f3b-ac0b-4f1c-8ceb-9864a39a6aca", 00:14:26.576 "is_configured": true, 00:14:26.576 "data_offset": 2048, 00:14:26.576 "data_size": 63488 00:14:26.576 }, 00:14:26.576 { 00:14:26.576 "name": "BaseBdev4", 00:14:26.576 "uuid": "3b8fcfdd-085a-4ccf-aaa6-1f2dc9beceeb", 00:14:26.576 "is_configured": true, 00:14:26.576 "data_offset": 2048, 00:14:26.576 "data_size": 63488 00:14:26.576 } 00:14:26.576 ] 00:14:26.576 }' 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.576 14:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.143 
14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 [2024-11-27 14:13:57.410290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.143 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.143 [2024-11-27 14:13:57.562566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:27.401 14:13:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.401 [2024-11-27 14:13:57.738441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:27.401 [2024-11-27 14:13:57.738545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.401 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.402 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 BaseBdev2 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 [ 00:14:27.661 { 00:14:27.661 "name": "BaseBdev2", 00:14:27.661 "aliases": [ 00:14:27.661 
"5cc714ab-679a-4dc5-9035-21a91618e028" 00:14:27.661 ], 00:14:27.661 "product_name": "Malloc disk", 00:14:27.661 "block_size": 512, 00:14:27.661 "num_blocks": 65536, 00:14:27.661 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:27.661 "assigned_rate_limits": { 00:14:27.661 "rw_ios_per_sec": 0, 00:14:27.661 "rw_mbytes_per_sec": 0, 00:14:27.661 "r_mbytes_per_sec": 0, 00:14:27.661 "w_mbytes_per_sec": 0 00:14:27.661 }, 00:14:27.661 "claimed": false, 00:14:27.661 "zoned": false, 00:14:27.661 "supported_io_types": { 00:14:27.661 "read": true, 00:14:27.661 "write": true, 00:14:27.661 "unmap": true, 00:14:27.661 "flush": true, 00:14:27.661 "reset": true, 00:14:27.661 "nvme_admin": false, 00:14:27.661 "nvme_io": false, 00:14:27.661 "nvme_io_md": false, 00:14:27.661 "write_zeroes": true, 00:14:27.661 "zcopy": true, 00:14:27.661 "get_zone_info": false, 00:14:27.661 "zone_management": false, 00:14:27.661 "zone_append": false, 00:14:27.661 "compare": false, 00:14:27.661 "compare_and_write": false, 00:14:27.661 "abort": true, 00:14:27.661 "seek_hole": false, 00:14:27.661 "seek_data": false, 00:14:27.661 "copy": true, 00:14:27.661 "nvme_iov_md": false 00:14:27.661 }, 00:14:27.661 "memory_domains": [ 00:14:27.661 { 00:14:27.661 "dma_device_id": "system", 00:14:27.661 "dma_device_type": 1 00:14:27.661 }, 00:14:27.661 { 00:14:27.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.661 "dma_device_type": 2 00:14:27.661 } 00:14:27.661 ], 00:14:27.661 "driver_specific": {} 00:14:27.661 } 00:14:27.661 ] 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.661 14:13:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.661 14:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 BaseBdev3 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.661 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.661 [ 00:14:27.661 { 
00:14:27.661 "name": "BaseBdev3", 00:14:27.661 "aliases": [ 00:14:27.661 "02e21397-4a87-460b-9ba2-04277b4e32cb" 00:14:27.661 ], 00:14:27.661 "product_name": "Malloc disk", 00:14:27.661 "block_size": 512, 00:14:27.661 "num_blocks": 65536, 00:14:27.661 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:27.661 "assigned_rate_limits": { 00:14:27.661 "rw_ios_per_sec": 0, 00:14:27.661 "rw_mbytes_per_sec": 0, 00:14:27.661 "r_mbytes_per_sec": 0, 00:14:27.661 "w_mbytes_per_sec": 0 00:14:27.661 }, 00:14:27.661 "claimed": false, 00:14:27.661 "zoned": false, 00:14:27.661 "supported_io_types": { 00:14:27.661 "read": true, 00:14:27.661 "write": true, 00:14:27.661 "unmap": true, 00:14:27.661 "flush": true, 00:14:27.661 "reset": true, 00:14:27.662 "nvme_admin": false, 00:14:27.662 "nvme_io": false, 00:14:27.662 "nvme_io_md": false, 00:14:27.662 "write_zeroes": true, 00:14:27.662 "zcopy": true, 00:14:27.662 "get_zone_info": false, 00:14:27.662 "zone_management": false, 00:14:27.662 "zone_append": false, 00:14:27.662 "compare": false, 00:14:27.662 "compare_and_write": false, 00:14:27.662 "abort": true, 00:14:27.662 "seek_hole": false, 00:14:27.662 "seek_data": false, 00:14:27.662 "copy": true, 00:14:27.662 "nvme_iov_md": false 00:14:27.662 }, 00:14:27.662 "memory_domains": [ 00:14:27.662 { 00:14:27.662 "dma_device_id": "system", 00:14:27.662 "dma_device_type": 1 00:14:27.662 }, 00:14:27.662 { 00:14:27.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.662 "dma_device_type": 2 00:14:27.662 } 00:14:27.662 ], 00:14:27.662 "driver_specific": {} 00:14:27.662 } 00:14:27.662 ] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.662 BaseBdev4 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:27.662 [ 00:14:27.662 { 00:14:27.662 "name": "BaseBdev4", 00:14:27.662 "aliases": [ 00:14:27.662 "6382d78a-ff88-42fd-a851-4acde5cd3f2a" 00:14:27.662 ], 00:14:27.662 "product_name": "Malloc disk", 00:14:27.662 "block_size": 512, 00:14:27.662 "num_blocks": 65536, 00:14:27.662 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:27.662 "assigned_rate_limits": { 00:14:27.662 "rw_ios_per_sec": 0, 00:14:27.662 "rw_mbytes_per_sec": 0, 00:14:27.662 "r_mbytes_per_sec": 0, 00:14:27.662 "w_mbytes_per_sec": 0 00:14:27.662 }, 00:14:27.662 "claimed": false, 00:14:27.662 "zoned": false, 00:14:27.662 "supported_io_types": { 00:14:27.662 "read": true, 00:14:27.662 "write": true, 00:14:27.662 "unmap": true, 00:14:27.662 "flush": true, 00:14:27.662 "reset": true, 00:14:27.662 "nvme_admin": false, 00:14:27.662 "nvme_io": false, 00:14:27.662 "nvme_io_md": false, 00:14:27.662 "write_zeroes": true, 00:14:27.662 "zcopy": true, 00:14:27.662 "get_zone_info": false, 00:14:27.662 "zone_management": false, 00:14:27.662 "zone_append": false, 00:14:27.662 "compare": false, 00:14:27.662 "compare_and_write": false, 00:14:27.662 "abort": true, 00:14:27.662 "seek_hole": false, 00:14:27.662 "seek_data": false, 00:14:27.662 "copy": true, 00:14:27.662 "nvme_iov_md": false 00:14:27.662 }, 00:14:27.662 "memory_domains": [ 00:14:27.662 { 00:14:27.662 "dma_device_id": "system", 00:14:27.662 "dma_device_type": 1 00:14:27.662 }, 00:14:27.662 { 00:14:27.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.662 "dma_device_type": 2 00:14:27.662 } 00:14:27.662 ], 00:14:27.662 "driver_specific": {} 00:14:27.662 } 00:14:27.662 ] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.662 14:13:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.662 [2024-11-27 14:13:58.133593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.662 [2024-11-27 14:13:58.134008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.662 [2024-11-27 14:13:58.134073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.662 [2024-11-27 14:13:58.136937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.662 [2024-11-27 14:13:58.137013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.662 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.921 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.921 "name": "Existed_Raid", 00:14:27.921 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:27.921 "strip_size_kb": 64, 00:14:27.921 "state": "configuring", 00:14:27.921 "raid_level": "raid0", 00:14:27.921 "superblock": true, 00:14:27.921 "num_base_bdevs": 4, 00:14:27.921 "num_base_bdevs_discovered": 3, 00:14:27.921 "num_base_bdevs_operational": 4, 00:14:27.921 "base_bdevs_list": [ 00:14:27.921 { 00:14:27.921 "name": "BaseBdev1", 00:14:27.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.921 "is_configured": false, 00:14:27.921 "data_offset": 0, 00:14:27.921 "data_size": 0 00:14:27.921 }, 00:14:27.921 { 00:14:27.921 "name": "BaseBdev2", 00:14:27.921 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:27.921 "is_configured": true, 00:14:27.921 "data_offset": 2048, 00:14:27.921 "data_size": 63488 
00:14:27.921 }, 00:14:27.921 { 00:14:27.921 "name": "BaseBdev3", 00:14:27.921 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:27.921 "is_configured": true, 00:14:27.921 "data_offset": 2048, 00:14:27.921 "data_size": 63488 00:14:27.921 }, 00:14:27.921 { 00:14:27.921 "name": "BaseBdev4", 00:14:27.921 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:27.921 "is_configured": true, 00:14:27.921 "data_offset": 2048, 00:14:27.921 "data_size": 63488 00:14:27.921 } 00:14:27.921 ] 00:14:27.921 }' 00:14:27.921 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.921 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.179 [2024-11-27 14:13:58.645625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.179 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.485 "name": "Existed_Raid", 00:14:28.485 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:28.485 "strip_size_kb": 64, 00:14:28.485 "state": "configuring", 00:14:28.485 "raid_level": "raid0", 00:14:28.485 "superblock": true, 00:14:28.485 "num_base_bdevs": 4, 00:14:28.485 "num_base_bdevs_discovered": 2, 00:14:28.485 "num_base_bdevs_operational": 4, 00:14:28.485 "base_bdevs_list": [ 00:14:28.485 { 00:14:28.485 "name": "BaseBdev1", 00:14:28.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.485 "is_configured": false, 00:14:28.485 "data_offset": 0, 00:14:28.485 "data_size": 0 00:14:28.485 }, 00:14:28.485 { 00:14:28.485 "name": null, 00:14:28.485 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:28.485 "is_configured": false, 00:14:28.485 "data_offset": 0, 00:14:28.485 "data_size": 63488 
00:14:28.485 }, 00:14:28.485 { 00:14:28.485 "name": "BaseBdev3", 00:14:28.485 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:28.485 "is_configured": true, 00:14:28.485 "data_offset": 2048, 00:14:28.485 "data_size": 63488 00:14:28.485 }, 00:14:28.485 { 00:14:28.485 "name": "BaseBdev4", 00:14:28.485 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:28.485 "is_configured": true, 00:14:28.485 "data_offset": 2048, 00:14:28.485 "data_size": 63488 00:14:28.485 } 00:14:28.485 ] 00:14:28.485 }' 00:14:28.485 14:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.485 14:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.744 [2024-11-27 14:13:59.237756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.744 BaseBdev1 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.744 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.012 [ 00:14:29.012 { 00:14:29.012 "name": "BaseBdev1", 00:14:29.012 "aliases": [ 00:14:29.012 "3afb0ae8-83f1-4c7c-8353-c6fefa251944" 00:14:29.012 ], 00:14:29.012 "product_name": "Malloc disk", 00:14:29.012 "block_size": 512, 00:14:29.012 "num_blocks": 65536, 00:14:29.012 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:29.012 "assigned_rate_limits": { 00:14:29.012 "rw_ios_per_sec": 0, 00:14:29.012 "rw_mbytes_per_sec": 0, 
00:14:29.012 "r_mbytes_per_sec": 0, 00:14:29.012 "w_mbytes_per_sec": 0 00:14:29.012 }, 00:14:29.012 "claimed": true, 00:14:29.012 "claim_type": "exclusive_write", 00:14:29.012 "zoned": false, 00:14:29.012 "supported_io_types": { 00:14:29.012 "read": true, 00:14:29.012 "write": true, 00:14:29.012 "unmap": true, 00:14:29.012 "flush": true, 00:14:29.012 "reset": true, 00:14:29.012 "nvme_admin": false, 00:14:29.012 "nvme_io": false, 00:14:29.012 "nvme_io_md": false, 00:14:29.012 "write_zeroes": true, 00:14:29.012 "zcopy": true, 00:14:29.012 "get_zone_info": false, 00:14:29.012 "zone_management": false, 00:14:29.012 "zone_append": false, 00:14:29.012 "compare": false, 00:14:29.012 "compare_and_write": false, 00:14:29.012 "abort": true, 00:14:29.012 "seek_hole": false, 00:14:29.012 "seek_data": false, 00:14:29.012 "copy": true, 00:14:29.012 "nvme_iov_md": false 00:14:29.012 }, 00:14:29.012 "memory_domains": [ 00:14:29.012 { 00:14:29.012 "dma_device_id": "system", 00:14:29.012 "dma_device_type": 1 00:14:29.012 }, 00:14:29.012 { 00:14:29.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.012 "dma_device_type": 2 00:14:29.012 } 00:14:29.012 ], 00:14:29.012 "driver_specific": {} 00:14:29.012 } 00:14:29.012 ] 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:29.012 14:13:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.012 "name": "Existed_Raid", 00:14:29.012 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:29.012 "strip_size_kb": 64, 00:14:29.012 "state": "configuring", 00:14:29.012 "raid_level": "raid0", 00:14:29.012 "superblock": true, 00:14:29.012 "num_base_bdevs": 4, 00:14:29.012 "num_base_bdevs_discovered": 3, 00:14:29.012 "num_base_bdevs_operational": 4, 00:14:29.012 "base_bdevs_list": [ 00:14:29.012 { 00:14:29.012 "name": "BaseBdev1", 00:14:29.012 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:29.012 "is_configured": true, 00:14:29.012 "data_offset": 2048, 00:14:29.012 "data_size": 63488 00:14:29.012 }, 00:14:29.012 { 
00:14:29.012 "name": null, 00:14:29.012 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:29.012 "is_configured": false, 00:14:29.012 "data_offset": 0, 00:14:29.012 "data_size": 63488 00:14:29.012 }, 00:14:29.012 { 00:14:29.012 "name": "BaseBdev3", 00:14:29.012 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:29.012 "is_configured": true, 00:14:29.012 "data_offset": 2048, 00:14:29.012 "data_size": 63488 00:14:29.012 }, 00:14:29.012 { 00:14:29.012 "name": "BaseBdev4", 00:14:29.012 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:29.012 "is_configured": true, 00:14:29.012 "data_offset": 2048, 00:14:29.012 "data_size": 63488 00:14:29.012 } 00:14:29.012 ] 00:14:29.012 }' 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.012 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.578 [2024-11-27 14:13:59.854156] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.578 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.578 14:13:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.578 "name": "Existed_Raid", 00:14:29.578 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:29.578 "strip_size_kb": 64, 00:14:29.578 "state": "configuring", 00:14:29.578 "raid_level": "raid0", 00:14:29.578 "superblock": true, 00:14:29.578 "num_base_bdevs": 4, 00:14:29.578 "num_base_bdevs_discovered": 2, 00:14:29.578 "num_base_bdevs_operational": 4, 00:14:29.578 "base_bdevs_list": [ 00:14:29.578 { 00:14:29.579 "name": "BaseBdev1", 00:14:29.579 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:29.579 "is_configured": true, 00:14:29.579 "data_offset": 2048, 00:14:29.579 "data_size": 63488 00:14:29.579 }, 00:14:29.579 { 00:14:29.579 "name": null, 00:14:29.579 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:29.579 "is_configured": false, 00:14:29.579 "data_offset": 0, 00:14:29.579 "data_size": 63488 00:14:29.579 }, 00:14:29.579 { 00:14:29.579 "name": null, 00:14:29.579 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:29.579 "is_configured": false, 00:14:29.579 "data_offset": 0, 00:14:29.579 "data_size": 63488 00:14:29.579 }, 00:14:29.579 { 00:14:29.579 "name": "BaseBdev4", 00:14:29.579 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:29.579 "is_configured": true, 00:14:29.579 "data_offset": 2048, 00:14:29.579 "data_size": 63488 00:14:29.579 } 00:14:29.579 ] 00:14:29.579 }' 00:14:29.579 14:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.579 14:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.145 
14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.145 [2024-11-27 14:14:00.430208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.145 "name": "Existed_Raid", 00:14:30.145 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:30.145 "strip_size_kb": 64, 00:14:30.145 "state": "configuring", 00:14:30.145 "raid_level": "raid0", 00:14:30.145 "superblock": true, 00:14:30.145 "num_base_bdevs": 4, 00:14:30.145 "num_base_bdevs_discovered": 3, 00:14:30.145 "num_base_bdevs_operational": 4, 00:14:30.145 "base_bdevs_list": [ 00:14:30.145 { 00:14:30.145 "name": "BaseBdev1", 00:14:30.145 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:30.145 "is_configured": true, 00:14:30.145 "data_offset": 2048, 00:14:30.145 "data_size": 63488 00:14:30.145 }, 00:14:30.145 { 00:14:30.145 "name": null, 00:14:30.145 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:30.145 "is_configured": false, 00:14:30.145 "data_offset": 0, 00:14:30.145 "data_size": 63488 00:14:30.145 }, 00:14:30.145 { 00:14:30.145 "name": "BaseBdev3", 00:14:30.145 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:30.145 "is_configured": true, 00:14:30.145 "data_offset": 2048, 00:14:30.145 "data_size": 63488 00:14:30.145 }, 00:14:30.145 { 00:14:30.145 "name": "BaseBdev4", 00:14:30.145 "uuid": 
"6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:30.145 "is_configured": true, 00:14:30.145 "data_offset": 2048, 00:14:30.145 "data_size": 63488 00:14:30.145 } 00:14:30.145 ] 00:14:30.145 }' 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.145 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.715 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:30.715 14:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.715 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.715 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.715 14:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.715 [2024-11-27 14:14:01.018458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.715 "name": "Existed_Raid", 00:14:30.715 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:30.715 "strip_size_kb": 64, 00:14:30.715 "state": "configuring", 00:14:30.715 "raid_level": "raid0", 00:14:30.715 "superblock": true, 00:14:30.715 "num_base_bdevs": 4, 00:14:30.715 "num_base_bdevs_discovered": 2, 00:14:30.715 "num_base_bdevs_operational": 4, 00:14:30.715 "base_bdevs_list": [ 00:14:30.715 { 00:14:30.715 "name": null, 00:14:30.715 
"uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:30.715 "is_configured": false, 00:14:30.715 "data_offset": 0, 00:14:30.715 "data_size": 63488 00:14:30.715 }, 00:14:30.715 { 00:14:30.715 "name": null, 00:14:30.715 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:30.715 "is_configured": false, 00:14:30.715 "data_offset": 0, 00:14:30.715 "data_size": 63488 00:14:30.715 }, 00:14:30.715 { 00:14:30.715 "name": "BaseBdev3", 00:14:30.715 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:30.715 "is_configured": true, 00:14:30.715 "data_offset": 2048, 00:14:30.715 "data_size": 63488 00:14:30.715 }, 00:14:30.715 { 00:14:30.715 "name": "BaseBdev4", 00:14:30.715 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:30.715 "is_configured": true, 00:14:30.715 "data_offset": 2048, 00:14:30.715 "data_size": 63488 00:14:30.715 } 00:14:30.715 ] 00:14:30.715 }' 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.715 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.283 [2024-11-27 14:14:01.673906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.283 14:14:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.283 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.284 "name": "Existed_Raid", 00:14:31.284 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:31.284 "strip_size_kb": 64, 00:14:31.284 "state": "configuring", 00:14:31.284 "raid_level": "raid0", 00:14:31.284 "superblock": true, 00:14:31.284 "num_base_bdevs": 4, 00:14:31.284 "num_base_bdevs_discovered": 3, 00:14:31.284 "num_base_bdevs_operational": 4, 00:14:31.284 "base_bdevs_list": [ 00:14:31.284 { 00:14:31.284 "name": null, 00:14:31.284 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:31.284 "is_configured": false, 00:14:31.284 "data_offset": 0, 00:14:31.284 "data_size": 63488 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "name": "BaseBdev2", 00:14:31.284 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:31.284 "is_configured": true, 00:14:31.284 "data_offset": 2048, 00:14:31.284 "data_size": 63488 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "name": "BaseBdev3", 00:14:31.284 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:31.284 "is_configured": true, 00:14:31.284 "data_offset": 2048, 00:14:31.284 "data_size": 63488 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "name": "BaseBdev4", 00:14:31.284 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:31.284 "is_configured": true, 00:14:31.284 "data_offset": 2048, 00:14:31.284 "data_size": 63488 00:14:31.284 } 00:14:31.284 ] 00:14:31.284 }' 00:14:31.284 14:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.284 14:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.852 14:14:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3afb0ae8-83f1-4c7c-8353-c6fefa251944 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 [2024-11-27 14:14:02.327367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:31.852 [2024-11-27 14:14:02.327643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:31.852 [2024-11-27 14:14:02.327661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:31.852 [2024-11-27 14:14:02.328017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:31.852 NewBaseBdev 00:14:31.852 [2024-11-27 14:14:02.328205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:31.852 [2024-11-27 14:14:02.328225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:31.852 [2024-11-27 14:14:02.328393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 14:14:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 [ 00:14:31.852 { 00:14:31.852 "name": "NewBaseBdev", 00:14:31.852 "aliases": [ 00:14:31.852 "3afb0ae8-83f1-4c7c-8353-c6fefa251944" 00:14:31.852 ], 00:14:31.852 "product_name": "Malloc disk", 00:14:31.852 "block_size": 512, 00:14:31.852 "num_blocks": 65536, 00:14:31.852 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:31.852 "assigned_rate_limits": { 00:14:31.852 "rw_ios_per_sec": 0, 00:14:31.852 "rw_mbytes_per_sec": 0, 00:14:31.852 "r_mbytes_per_sec": 0, 00:14:31.852 "w_mbytes_per_sec": 0 00:14:31.852 }, 00:14:31.852 "claimed": true, 00:14:31.852 "claim_type": "exclusive_write", 00:14:31.852 "zoned": false, 00:14:31.852 "supported_io_types": { 00:14:31.852 "read": true, 00:14:31.852 "write": true, 00:14:31.852 "unmap": true, 00:14:31.852 "flush": true, 00:14:31.852 "reset": true, 00:14:31.852 "nvme_admin": false, 00:14:31.852 "nvme_io": false, 00:14:31.852 "nvme_io_md": false, 00:14:31.852 "write_zeroes": true, 00:14:31.852 "zcopy": true, 00:14:31.852 "get_zone_info": false, 00:14:31.852 "zone_management": false, 00:14:31.852 "zone_append": false, 00:14:31.852 "compare": false, 00:14:31.852 "compare_and_write": false, 00:14:31.852 "abort": true, 00:14:31.852 "seek_hole": false, 00:14:31.852 "seek_data": false, 00:14:31.852 "copy": true, 00:14:31.852 "nvme_iov_md": false 00:14:31.852 }, 00:14:31.852 "memory_domains": [ 00:14:31.852 { 00:14:31.852 "dma_device_id": "system", 00:14:31.852 "dma_device_type": 1 00:14:31.852 }, 00:14:31.852 { 00:14:31.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.852 "dma_device_type": 2 00:14:31.852 } 00:14:31.852 ], 00:14:31.852 "driver_specific": {} 00:14:31.852 } 00:14:31.852 ] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.852 14:14:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.852 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.111 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.111 "name": "Existed_Raid", 00:14:32.111 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:32.111 "strip_size_kb": 64, 00:14:32.111 
"state": "online", 00:14:32.111 "raid_level": "raid0", 00:14:32.111 "superblock": true, 00:14:32.111 "num_base_bdevs": 4, 00:14:32.111 "num_base_bdevs_discovered": 4, 00:14:32.111 "num_base_bdevs_operational": 4, 00:14:32.111 "base_bdevs_list": [ 00:14:32.111 { 00:14:32.111 "name": "NewBaseBdev", 00:14:32.111 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:32.111 "is_configured": true, 00:14:32.111 "data_offset": 2048, 00:14:32.111 "data_size": 63488 00:14:32.111 }, 00:14:32.111 { 00:14:32.111 "name": "BaseBdev2", 00:14:32.111 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:32.111 "is_configured": true, 00:14:32.111 "data_offset": 2048, 00:14:32.111 "data_size": 63488 00:14:32.112 }, 00:14:32.112 { 00:14:32.112 "name": "BaseBdev3", 00:14:32.112 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:32.112 "is_configured": true, 00:14:32.112 "data_offset": 2048, 00:14:32.112 "data_size": 63488 00:14:32.112 }, 00:14:32.112 { 00:14:32.112 "name": "BaseBdev4", 00:14:32.112 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:32.112 "is_configured": true, 00:14:32.112 "data_offset": 2048, 00:14:32.112 "data_size": 63488 00:14:32.112 } 00:14:32.112 ] 00:14:32.112 }' 00:14:32.112 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.112 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:32.370 
14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:32.370 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.629 [2024-11-27 14:14:02.884097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.629 14:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.629 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:32.629 "name": "Existed_Raid", 00:14:32.629 "aliases": [ 00:14:32.629 "11b51742-d765-4083-b1f1-54ba653fb1d1" 00:14:32.629 ], 00:14:32.629 "product_name": "Raid Volume", 00:14:32.629 "block_size": 512, 00:14:32.629 "num_blocks": 253952, 00:14:32.629 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:32.629 "assigned_rate_limits": { 00:14:32.629 "rw_ios_per_sec": 0, 00:14:32.629 "rw_mbytes_per_sec": 0, 00:14:32.629 "r_mbytes_per_sec": 0, 00:14:32.629 "w_mbytes_per_sec": 0 00:14:32.629 }, 00:14:32.629 "claimed": false, 00:14:32.629 "zoned": false, 00:14:32.629 "supported_io_types": { 00:14:32.629 "read": true, 00:14:32.629 "write": true, 00:14:32.629 "unmap": true, 00:14:32.629 "flush": true, 00:14:32.629 "reset": true, 00:14:32.629 "nvme_admin": false, 00:14:32.629 "nvme_io": false, 00:14:32.629 "nvme_io_md": false, 00:14:32.629 "write_zeroes": true, 00:14:32.629 "zcopy": false, 00:14:32.629 "get_zone_info": false, 00:14:32.629 "zone_management": false, 00:14:32.629 "zone_append": false, 00:14:32.629 "compare": false, 00:14:32.629 "compare_and_write": false, 00:14:32.629 "abort": 
false, 00:14:32.629 "seek_hole": false, 00:14:32.629 "seek_data": false, 00:14:32.629 "copy": false, 00:14:32.629 "nvme_iov_md": false 00:14:32.629 }, 00:14:32.629 "memory_domains": [ 00:14:32.629 { 00:14:32.629 "dma_device_id": "system", 00:14:32.629 "dma_device_type": 1 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.629 "dma_device_type": 2 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "system", 00:14:32.629 "dma_device_type": 1 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.629 "dma_device_type": 2 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "system", 00:14:32.629 "dma_device_type": 1 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.629 "dma_device_type": 2 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "system", 00:14:32.629 "dma_device_type": 1 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.629 "dma_device_type": 2 00:14:32.629 } 00:14:32.629 ], 00:14:32.629 "driver_specific": { 00:14:32.629 "raid": { 00:14:32.629 "uuid": "11b51742-d765-4083-b1f1-54ba653fb1d1", 00:14:32.629 "strip_size_kb": 64, 00:14:32.629 "state": "online", 00:14:32.629 "raid_level": "raid0", 00:14:32.629 "superblock": true, 00:14:32.629 "num_base_bdevs": 4, 00:14:32.629 "num_base_bdevs_discovered": 4, 00:14:32.629 "num_base_bdevs_operational": 4, 00:14:32.629 "base_bdevs_list": [ 00:14:32.629 { 00:14:32.629 "name": "NewBaseBdev", 00:14:32.629 "uuid": "3afb0ae8-83f1-4c7c-8353-c6fefa251944", 00:14:32.629 "is_configured": true, 00:14:32.629 "data_offset": 2048, 00:14:32.629 "data_size": 63488 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "name": "BaseBdev2", 00:14:32.629 "uuid": "5cc714ab-679a-4dc5-9035-21a91618e028", 00:14:32.629 "is_configured": true, 00:14:32.629 "data_offset": 2048, 00:14:32.629 "data_size": 63488 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 
"name": "BaseBdev3", 00:14:32.629 "uuid": "02e21397-4a87-460b-9ba2-04277b4e32cb", 00:14:32.629 "is_configured": true, 00:14:32.629 "data_offset": 2048, 00:14:32.629 "data_size": 63488 00:14:32.629 }, 00:14:32.629 { 00:14:32.629 "name": "BaseBdev4", 00:14:32.629 "uuid": "6382d78a-ff88-42fd-a851-4acde5cd3f2a", 00:14:32.629 "is_configured": true, 00:14:32.629 "data_offset": 2048, 00:14:32.629 "data_size": 63488 00:14:32.629 } 00:14:32.629 ] 00:14:32.629 } 00:14:32.629 } 00:14:32.629 }' 00:14:32.629 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.629 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:32.629 BaseBdev2 00:14:32.629 BaseBdev3 00:14:32.629 BaseBdev4' 00:14:32.629 14:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.629 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.629 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.629 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.630 14:14:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.630 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.889 [2024-11-27 14:14:03.251685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.889 [2024-11-27 14:14:03.251721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.889 [2024-11-27 14:14:03.251826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.889 [2024-11-27 14:14:03.251934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.889 [2024-11-27 14:14:03.251954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70331 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70331 ']' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70331 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70331 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70331' 00:14:32.889 killing process with pid 70331 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70331 00:14:32.889 [2024-11-27 14:14:03.292321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.889 14:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70331 00:14:33.149 [2024-11-27 14:14:03.623952] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.525 14:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:34.525 ************************************ 00:14:34.525 END TEST raid_state_function_test_sb 00:14:34.525 ************************************ 00:14:34.525 00:14:34.525 real 0m12.842s 00:14:34.525 user 0m21.257s 00:14:34.525 sys 
0m1.856s 00:14:34.525 14:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.525 14:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.525 14:14:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:34.525 14:14:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:34.525 14:14:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.525 14:14:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.525 ************************************ 00:14:34.525 START TEST raid_superblock_test 00:14:34.525 ************************************ 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71016 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71016 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71016 ']' 00:14:34.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.525 14:14:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.525 [2024-11-27 14:14:04.823909] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:14:34.525 [2024-11-27 14:14:04.824374] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71016 ] 00:14:34.525 [2024-11-27 14:14:05.008044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.784 [2024-11-27 14:14:05.136306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.043 [2024-11-27 14:14:05.330345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.043 [2024-11-27 14:14:05.330448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:35.301 
14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.301 malloc1 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.301 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 [2024-11-27 14:14:05.818352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.560 [2024-11-27 14:14:05.818610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.560 [2024-11-27 14:14:05.818688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:35.560 [2024-11-27 14:14:05.818921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.560 [2024-11-27 14:14:05.821840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.560 [2024-11-27 14:14:05.822026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.560 pt1 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 malloc2 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 [2024-11-27 14:14:05.874404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:35.560 [2024-11-27 14:14:05.874480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.560 [2024-11-27 14:14:05.874518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:35.560 [2024-11-27 14:14:05.874533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.560 [2024-11-27 14:14:05.877548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.560 [2024-11-27 14:14:05.877588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:35.560 
pt2 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 malloc3 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 [2024-11-27 14:14:05.943707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:35.560 [2024-11-27 14:14:05.943943] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.560 [2024-11-27 14:14:05.944021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:35.560 [2024-11-27 14:14:05.944185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.560 [2024-11-27 14:14:05.947140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.560 [2024-11-27 14:14:05.947294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:35.560 pt3 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 malloc4 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.560 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.560 [2024-11-27 14:14:06.002231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:35.560 [2024-11-27 14:14:06.002429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.560 [2024-11-27 14:14:06.002510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:35.560 [2024-11-27 14:14:06.002663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.560 [2024-11-27 14:14:06.005561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.560 [2024-11-27 14:14:06.005758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:35.560 pt4 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.561 [2024-11-27 14:14:06.014431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:35.561 [2024-11-27 
14:14:06.017059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:35.561 [2024-11-27 14:14:06.017200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:35.561 [2024-11-27 14:14:06.017268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:35.561 [2024-11-27 14:14:06.017518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:35.561 [2024-11-27 14:14:06.017550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:35.561 [2024-11-27 14:14:06.017884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:35.561 [2024-11-27 14:14:06.018172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:35.561 [2024-11-27 14:14:06.018194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:35.561 [2024-11-27 14:14:06.018421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.561 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.820 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.820 "name": "raid_bdev1", 00:14:35.820 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:35.820 "strip_size_kb": 64, 00:14:35.820 "state": "online", 00:14:35.820 "raid_level": "raid0", 00:14:35.820 "superblock": true, 00:14:35.820 "num_base_bdevs": 4, 00:14:35.820 "num_base_bdevs_discovered": 4, 00:14:35.820 "num_base_bdevs_operational": 4, 00:14:35.820 "base_bdevs_list": [ 00:14:35.820 { 00:14:35.820 "name": "pt1", 00:14:35.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.820 "is_configured": true, 00:14:35.820 "data_offset": 2048, 00:14:35.820 "data_size": 63488 00:14:35.820 }, 00:14:35.820 { 00:14:35.820 "name": "pt2", 00:14:35.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.820 "is_configured": true, 00:14:35.820 "data_offset": 2048, 00:14:35.820 "data_size": 63488 00:14:35.820 }, 00:14:35.820 { 00:14:35.820 "name": "pt3", 00:14:35.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.820 "is_configured": true, 00:14:35.820 "data_offset": 2048, 00:14:35.820 
"data_size": 63488 00:14:35.820 }, 00:14:35.820 { 00:14:35.820 "name": "pt4", 00:14:35.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.820 "is_configured": true, 00:14:35.820 "data_offset": 2048, 00:14:35.820 "data_size": 63488 00:14:35.820 } 00:14:35.820 ] 00:14:35.820 }' 00:14:35.820 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.820 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.079 [2024-11-27 14:14:06.543389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.079 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.079 "name": "raid_bdev1", 00:14:36.079 "aliases": [ 00:14:36.079 "b75b2862-16f2-4e13-812c-7a97529e6467" 
00:14:36.079 ], 00:14:36.079 "product_name": "Raid Volume", 00:14:36.079 "block_size": 512, 00:14:36.079 "num_blocks": 253952, 00:14:36.079 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:36.079 "assigned_rate_limits": { 00:14:36.079 "rw_ios_per_sec": 0, 00:14:36.079 "rw_mbytes_per_sec": 0, 00:14:36.079 "r_mbytes_per_sec": 0, 00:14:36.079 "w_mbytes_per_sec": 0 00:14:36.079 }, 00:14:36.079 "claimed": false, 00:14:36.079 "zoned": false, 00:14:36.079 "supported_io_types": { 00:14:36.079 "read": true, 00:14:36.079 "write": true, 00:14:36.079 "unmap": true, 00:14:36.079 "flush": true, 00:14:36.079 "reset": true, 00:14:36.079 "nvme_admin": false, 00:14:36.079 "nvme_io": false, 00:14:36.079 "nvme_io_md": false, 00:14:36.079 "write_zeroes": true, 00:14:36.079 "zcopy": false, 00:14:36.079 "get_zone_info": false, 00:14:36.079 "zone_management": false, 00:14:36.079 "zone_append": false, 00:14:36.079 "compare": false, 00:14:36.079 "compare_and_write": false, 00:14:36.079 "abort": false, 00:14:36.079 "seek_hole": false, 00:14:36.079 "seek_data": false, 00:14:36.079 "copy": false, 00:14:36.079 "nvme_iov_md": false 00:14:36.079 }, 00:14:36.079 "memory_domains": [ 00:14:36.079 { 00:14:36.079 "dma_device_id": "system", 00:14:36.079 "dma_device_type": 1 00:14:36.079 }, 00:14:36.079 { 00:14:36.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.080 "dma_device_type": 2 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "dma_device_id": "system", 00:14:36.080 "dma_device_type": 1 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.080 "dma_device_type": 2 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "dma_device_id": "system", 00:14:36.080 "dma_device_type": 1 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.080 "dma_device_type": 2 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "dma_device_id": "system", 00:14:36.080 "dma_device_type": 1 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:36.080 "dma_device_type": 2 00:14:36.080 } 00:14:36.080 ], 00:14:36.080 "driver_specific": { 00:14:36.080 "raid": { 00:14:36.080 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:36.080 "strip_size_kb": 64, 00:14:36.080 "state": "online", 00:14:36.080 "raid_level": "raid0", 00:14:36.080 "superblock": true, 00:14:36.080 "num_base_bdevs": 4, 00:14:36.080 "num_base_bdevs_discovered": 4, 00:14:36.080 "num_base_bdevs_operational": 4, 00:14:36.080 "base_bdevs_list": [ 00:14:36.080 { 00:14:36.080 "name": "pt1", 00:14:36.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.080 "is_configured": true, 00:14:36.080 "data_offset": 2048, 00:14:36.080 "data_size": 63488 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "name": "pt2", 00:14:36.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.080 "is_configured": true, 00:14:36.080 "data_offset": 2048, 00:14:36.080 "data_size": 63488 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "name": "pt3", 00:14:36.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.080 "is_configured": true, 00:14:36.080 "data_offset": 2048, 00:14:36.080 "data_size": 63488 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "name": "pt4", 00:14:36.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.080 "is_configured": true, 00:14:36.080 "data_offset": 2048, 00:14:36.080 "data_size": 63488 00:14:36.080 } 00:14:36.080 ] 00:14:36.080 } 00:14:36.080 } 00:14:36.080 }' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:36.339 pt2 00:14:36.339 pt3 00:14:36.339 pt4' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.339 14:14:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.339 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 [2024-11-27 14:14:06.939257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b75b2862-16f2-4e13-812c-7a97529e6467 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b75b2862-16f2-4e13-812c-7a97529e6467 ']' 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 [2024-11-27 14:14:06.990863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.598 [2024-11-27 14:14:06.991018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.598 [2024-11-27 14:14:06.991267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.598 [2024-11-27 14:14:06.991483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.598 [2024-11-27 14:14:06.991629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:36.598 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.598 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.858 14:14:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.858 [2024-11-27 14:14:07.154952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:36.858 [2024-11-27 14:14:07.157811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:36.858 [2024-11-27 14:14:07.157915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:36.858 [2024-11-27 14:14:07.157975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:36.858 [2024-11-27 14:14:07.158068] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:36.858 [2024-11-27 14:14:07.158158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:36.858 [2024-11-27 14:14:07.158194] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:36.858 [2024-11-27 14:14:07.158228] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:36.858 [2024-11-27 14:14:07.158252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.858 [2024-11-27 14:14:07.158273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:14:36.858 request: 00:14:36.858 { 00:14:36.858 "name": "raid_bdev1", 00:14:36.858 "raid_level": "raid0", 00:14:36.858 "base_bdevs": [ 00:14:36.858 "malloc1", 00:14:36.858 "malloc2", 00:14:36.858 "malloc3", 00:14:36.858 "malloc4" 00:14:36.858 ], 00:14:36.858 "strip_size_kb": 64, 00:14:36.858 "superblock": false, 00:14:36.858 "method": "bdev_raid_create", 00:14:36.858 "req_id": 1 00:14:36.858 } 00:14:36.858 Got JSON-RPC error response 00:14:36.858 response: 00:14:36.858 { 00:14:36.858 "code": -17, 00:14:36.858 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:36.858 } 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.858 [2024-11-27 14:14:07.235078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.858 [2024-11-27 14:14:07.235170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.858 [2024-11-27 14:14:07.235212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:36.858 [2024-11-27 14:14:07.235233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.858 [2024-11-27 14:14:07.238434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.858 [2024-11-27 14:14:07.238493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.858 [2024-11-27 14:14:07.238612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:36.858 [2024-11-27 14:14:07.238699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.858 pt1 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.858 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.859 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.859 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.859 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.859 "name": "raid_bdev1", 00:14:36.859 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:36.859 "strip_size_kb": 64, 00:14:36.859 "state": "configuring", 00:14:36.859 "raid_level": "raid0", 00:14:36.859 "superblock": true, 00:14:36.859 "num_base_bdevs": 4, 00:14:36.859 "num_base_bdevs_discovered": 1, 00:14:36.859 "num_base_bdevs_operational": 4, 00:14:36.859 "base_bdevs_list": [ 00:14:36.859 { 00:14:36.859 "name": "pt1", 00:14:36.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.859 "is_configured": true, 00:14:36.859 "data_offset": 2048, 00:14:36.859 "data_size": 63488 00:14:36.859 }, 00:14:36.859 { 00:14:36.859 "name": null, 00:14:36.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.859 "is_configured": false, 00:14:36.859 "data_offset": 2048, 00:14:36.859 "data_size": 63488 00:14:36.859 }, 00:14:36.859 { 00:14:36.859 "name": null, 00:14:36.859 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.859 "is_configured": false, 00:14:36.859 "data_offset": 2048, 00:14:36.859 "data_size": 63488 00:14:36.859 }, 00:14:36.859 { 00:14:36.859 "name": null, 00:14:36.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.859 "is_configured": false, 00:14:36.859 "data_offset": 2048, 00:14:36.859 "data_size": 63488 00:14:36.859 } 00:14:36.859 ] 00:14:36.859 }' 00:14:36.859 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.859 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.426 [2024-11-27 14:14:07.795323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.426 [2024-11-27 14:14:07.795765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.426 [2024-11-27 14:14:07.795839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:37.426 [2024-11-27 14:14:07.795873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.426 [2024-11-27 14:14:07.796598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.426 [2024-11-27 14:14:07.796653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.426 [2024-11-27 14:14:07.796801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:37.426 [2024-11-27 14:14:07.796878] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.426 pt2 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.426 [2024-11-27 14:14:07.803224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.426 14:14:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.426 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.426 "name": "raid_bdev1", 00:14:37.426 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:37.426 "strip_size_kb": 64, 00:14:37.426 "state": "configuring", 00:14:37.426 "raid_level": "raid0", 00:14:37.426 "superblock": true, 00:14:37.426 "num_base_bdevs": 4, 00:14:37.427 "num_base_bdevs_discovered": 1, 00:14:37.427 "num_base_bdevs_operational": 4, 00:14:37.427 "base_bdevs_list": [ 00:14:37.427 { 00:14:37.427 "name": "pt1", 00:14:37.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.427 "is_configured": true, 00:14:37.427 "data_offset": 2048, 00:14:37.427 "data_size": 63488 00:14:37.427 }, 00:14:37.427 { 00:14:37.427 "name": null, 00:14:37.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.427 "is_configured": false, 00:14:37.427 "data_offset": 0, 00:14:37.427 "data_size": 63488 00:14:37.427 }, 00:14:37.427 { 00:14:37.427 "name": null, 00:14:37.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.427 "is_configured": false, 00:14:37.427 "data_offset": 2048, 00:14:37.427 "data_size": 63488 00:14:37.427 }, 00:14:37.427 { 00:14:37.427 "name": null, 00:14:37.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:37.427 "is_configured": false, 00:14:37.427 "data_offset": 2048, 00:14:37.427 "data_size": 63488 00:14:37.427 } 00:14:37.427 ] 00:14:37.427 }' 00:14:37.427 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.427 14:14:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 [2024-11-27 14:14:08.339505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.995 [2024-11-27 14:14:08.339649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.995 [2024-11-27 14:14:08.339684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:37.995 [2024-11-27 14:14:08.339699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.995 [2024-11-27 14:14:08.340345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.995 [2024-11-27 14:14:08.340372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.995 [2024-11-27 14:14:08.340505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:37.995 [2024-11-27 14:14:08.340541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.995 pt2 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 [2024-11-27 14:14:08.351368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:37.995 [2024-11-27 14:14:08.351423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.995 [2024-11-27 14:14:08.351450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:37.995 [2024-11-27 14:14:08.351463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.995 [2024-11-27 14:14:08.351938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.995 [2024-11-27 14:14:08.351962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:37.995 [2024-11-27 14:14:08.352036] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:37.995 [2024-11-27 14:14:08.352070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:37.995 pt3 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 [2024-11-27 14:14:08.363350] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:37.995 [2024-11-27 14:14:08.363412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.995 [2024-11-27 14:14:08.363438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:37.995 [2024-11-27 14:14:08.363461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.995 [2024-11-27 14:14:08.364003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.995 [2024-11-27 14:14:08.364028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:37.995 [2024-11-27 14:14:08.364108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:37.995 [2024-11-27 14:14:08.364142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:37.995 [2024-11-27 14:14:08.364323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:37.995 [2024-11-27 14:14:08.364544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:37.995 [2024-11-27 14:14:08.364907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.995 [2024-11-27 14:14:08.365108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:37.995 [2024-11-27 14:14:08.365131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:37.995 [2024-11-27 14:14:08.365309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.995 pt4 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.995 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.995 "name": "raid_bdev1", 00:14:37.995 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:37.995 "strip_size_kb": 64, 00:14:37.995 "state": "online", 00:14:37.995 "raid_level": "raid0", 00:14:37.995 
"superblock": true, 00:14:37.995 "num_base_bdevs": 4, 00:14:37.995 "num_base_bdevs_discovered": 4, 00:14:37.996 "num_base_bdevs_operational": 4, 00:14:37.996 "base_bdevs_list": [ 00:14:37.996 { 00:14:37.996 "name": "pt1", 00:14:37.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 }, 00:14:37.996 { 00:14:37.996 "name": "pt2", 00:14:37.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 }, 00:14:37.996 { 00:14:37.996 "name": "pt3", 00:14:37.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 }, 00:14:37.996 { 00:14:37.996 "name": "pt4", 00:14:37.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:37.996 "is_configured": true, 00:14:37.996 "data_offset": 2048, 00:14:37.996 "data_size": 63488 00:14:37.996 } 00:14:37.996 ] 00:14:37.996 }' 00:14:37.996 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.996 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:38.587 14:14:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.587 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:38.588 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.588 [2024-11-27 14:14:08.888181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.588 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.588 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:38.588 "name": "raid_bdev1", 00:14:38.588 "aliases": [ 00:14:38.588 "b75b2862-16f2-4e13-812c-7a97529e6467" 00:14:38.588 ], 00:14:38.588 "product_name": "Raid Volume", 00:14:38.588 "block_size": 512, 00:14:38.588 "num_blocks": 253952, 00:14:38.588 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:38.588 "assigned_rate_limits": { 00:14:38.588 "rw_ios_per_sec": 0, 00:14:38.588 "rw_mbytes_per_sec": 0, 00:14:38.588 "r_mbytes_per_sec": 0, 00:14:38.588 "w_mbytes_per_sec": 0 00:14:38.588 }, 00:14:38.588 "claimed": false, 00:14:38.588 "zoned": false, 00:14:38.588 "supported_io_types": { 00:14:38.588 "read": true, 00:14:38.588 "write": true, 00:14:38.588 "unmap": true, 00:14:38.588 "flush": true, 00:14:38.588 "reset": true, 00:14:38.588 "nvme_admin": false, 00:14:38.588 "nvme_io": false, 00:14:38.588 "nvme_io_md": false, 00:14:38.588 "write_zeroes": true, 00:14:38.588 "zcopy": false, 00:14:38.588 "get_zone_info": false, 00:14:38.588 "zone_management": false, 00:14:38.588 "zone_append": false, 00:14:38.588 "compare": false, 00:14:38.588 "compare_and_write": false, 00:14:38.588 "abort": false, 00:14:38.588 "seek_hole": false, 00:14:38.588 "seek_data": false, 00:14:38.588 "copy": false, 00:14:38.588 "nvme_iov_md": false 00:14:38.588 }, 00:14:38.588 
"memory_domains": [ 00:14:38.588 { 00:14:38.588 "dma_device_id": "system", 00:14:38.588 "dma_device_type": 1 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.588 "dma_device_type": 2 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "system", 00:14:38.588 "dma_device_type": 1 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.588 "dma_device_type": 2 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "system", 00:14:38.588 "dma_device_type": 1 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.588 "dma_device_type": 2 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "system", 00:14:38.588 "dma_device_type": 1 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.588 "dma_device_type": 2 00:14:38.588 } 00:14:38.588 ], 00:14:38.588 "driver_specific": { 00:14:38.588 "raid": { 00:14:38.588 "uuid": "b75b2862-16f2-4e13-812c-7a97529e6467", 00:14:38.588 "strip_size_kb": 64, 00:14:38.588 "state": "online", 00:14:38.588 "raid_level": "raid0", 00:14:38.588 "superblock": true, 00:14:38.588 "num_base_bdevs": 4, 00:14:38.588 "num_base_bdevs_discovered": 4, 00:14:38.588 "num_base_bdevs_operational": 4, 00:14:38.588 "base_bdevs_list": [ 00:14:38.588 { 00:14:38.588 "name": "pt1", 00:14:38.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.588 "is_configured": true, 00:14:38.588 "data_offset": 2048, 00:14:38.588 "data_size": 63488 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "name": "pt2", 00:14:38.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.588 "is_configured": true, 00:14:38.588 "data_offset": 2048, 00:14:38.588 "data_size": 63488 00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "name": "pt3", 00:14:38.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.588 "is_configured": true, 00:14:38.588 "data_offset": 2048, 00:14:38.588 "data_size": 63488 
00:14:38.588 }, 00:14:38.588 { 00:14:38.588 "name": "pt4", 00:14:38.588 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.588 "is_configured": true, 00:14:38.588 "data_offset": 2048, 00:14:38.588 "data_size": 63488 00:14:38.588 } 00:14:38.588 ] 00:14:38.588 } 00:14:38.588 } 00:14:38.588 }' 00:14:38.588 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.588 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:38.588 pt2 00:14:38.588 pt3 00:14:38.588 pt4' 00:14:38.588 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.588 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.866 [2024-11-27 14:14:09.268077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b75b2862-16f2-4e13-812c-7a97529e6467 '!=' b75b2862-16f2-4e13-812c-7a97529e6467 ']' 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:38.866 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71016 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71016 ']' 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71016 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71016 00:14:38.867 killing process with pid 71016 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71016' 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71016 00:14:38.867 [2024-11-27 14:14:09.341907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.867 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71016 00:14:38.867 [2024-11-27 14:14:09.342118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.867 [2024-11-27 14:14:09.342229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.867 [2024-11-27 14:14:09.342247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:39.434 [2024-11-27 14:14:09.729152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.808 14:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:40.808 00:14:40.808 real 0m6.174s 00:14:40.808 user 0m9.212s 00:14:40.808 sys 0m0.897s 00:14:40.808 14:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.808 ************************************ 00:14:40.808 END TEST raid_superblock_test 00:14:40.808 ************************************ 00:14:40.808 14:14:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.808 14:14:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:40.808 14:14:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:40.808 14:14:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.808 14:14:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.808 ************************************ 00:14:40.808 START TEST raid_read_error_test 00:14:40.808 ************************************ 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tNR5Dj7inB 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71287 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71287 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71287 ']' 00:14:40.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.808 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.808 [2024-11-27 14:14:11.055201] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:14:40.809 [2024-11-27 14:14:11.055568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71287 ] 00:14:40.809 [2024-11-27 14:14:11.236773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.067 [2024-11-27 14:14:11.415134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.326 [2024-11-27 14:14:11.654461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.326 [2024-11-27 14:14:11.654536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.585 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.585 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:41.585 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:41.585 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:41.585 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.585 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 BaseBdev1_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 true 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 [2024-11-27 14:14:12.138454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:41.844 [2024-11-27 14:14:12.138922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.844 [2024-11-27 14:14:12.138966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:41.844 [2024-11-27 14:14:12.138999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.844 [2024-11-27 14:14:12.142212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.844 [2024-11-27 14:14:12.142265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:41.844 BaseBdev1 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 BaseBdev2_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 true 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 [2024-11-27 14:14:12.204058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:41.844 [2024-11-27 14:14:12.204152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.844 [2024-11-27 14:14:12.204177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:41.844 [2024-11-27 14:14:12.204193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.844 [2024-11-27 14:14:12.207298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.844 [2024-11-27 14:14:12.207346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:41.844 BaseBdev2 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 BaseBdev3_malloc 00:14:41.844 14:14:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 true 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:41.844 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.845 [2024-11-27 14:14:12.277371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:41.845 [2024-11-27 14:14:12.277489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.845 [2024-11-27 14:14:12.277519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:41.845 [2024-11-27 14:14:12.277537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.845 [2024-11-27 14:14:12.280840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.845 [2024-11-27 14:14:12.281194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:41.845 BaseBdev3 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.845 BaseBdev4_malloc 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.845 true 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.845 [2024-11-27 14:14:12.339000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:41.845 [2024-11-27 14:14:12.339101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.845 [2024-11-27 14:14:12.339132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:41.845 [2024-11-27 14:14:12.339151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.845 [2024-11-27 14:14:12.342271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.845 [2024-11-27 14:14:12.342326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:41.845 BaseBdev4 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.845 [2024-11-27 14:14:12.347309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.845 [2024-11-27 14:14:12.350138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.845 [2024-11-27 14:14:12.350470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.845 [2024-11-27 14:14:12.350592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:41.845 [2024-11-27 14:14:12.350932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:41.845 [2024-11-27 14:14:12.350959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:41.845 [2024-11-27 14:14:12.351283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:41.845 [2024-11-27 14:14:12.351518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:41.845 [2024-11-27 14:14:12.351538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:41.845 [2024-11-27 14:14:12.351807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:41.845 14:14:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.845 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.105 "name": "raid_bdev1", 00:14:42.105 "uuid": "6706dbd0-6d37-4f82-8f45-008f908ffd9e", 00:14:42.105 "strip_size_kb": 64, 00:14:42.105 "state": "online", 00:14:42.105 "raid_level": "raid0", 00:14:42.105 "superblock": true, 00:14:42.105 "num_base_bdevs": 4, 00:14:42.105 "num_base_bdevs_discovered": 4, 00:14:42.105 "num_base_bdevs_operational": 4, 00:14:42.105 "base_bdevs_list": [ 00:14:42.105 
{ 00:14:42.105 "name": "BaseBdev1", 00:14:42.105 "uuid": "f166e2e4-4b9a-5c21-b23c-e31ea3816ce8", 00:14:42.105 "is_configured": true, 00:14:42.105 "data_offset": 2048, 00:14:42.105 "data_size": 63488 00:14:42.105 }, 00:14:42.105 { 00:14:42.105 "name": "BaseBdev2", 00:14:42.105 "uuid": "39a2fdec-881f-52af-b97f-eac23a726596", 00:14:42.105 "is_configured": true, 00:14:42.105 "data_offset": 2048, 00:14:42.105 "data_size": 63488 00:14:42.105 }, 00:14:42.105 { 00:14:42.105 "name": "BaseBdev3", 00:14:42.105 "uuid": "1b19fa1f-9546-5bbf-931b-87452652d90a", 00:14:42.105 "is_configured": true, 00:14:42.105 "data_offset": 2048, 00:14:42.105 "data_size": 63488 00:14:42.105 }, 00:14:42.105 { 00:14:42.105 "name": "BaseBdev4", 00:14:42.105 "uuid": "152b04fe-3765-5c6b-bf29-26a1f13e10ee", 00:14:42.105 "is_configured": true, 00:14:42.105 "data_offset": 2048, 00:14:42.105 "data_size": 63488 00:14:42.105 } 00:14:42.105 ] 00:14:42.105 }' 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.105 14:14:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.364 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:42.364 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:42.623 [2024-11-27 14:14:12.977847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.562 14:14:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.562 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.563 14:14:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.563 "name": "raid_bdev1", 00:14:43.563 "uuid": "6706dbd0-6d37-4f82-8f45-008f908ffd9e", 00:14:43.563 "strip_size_kb": 64, 00:14:43.563 "state": "online", 00:14:43.563 "raid_level": "raid0", 00:14:43.563 "superblock": true, 00:14:43.563 "num_base_bdevs": 4, 00:14:43.563 "num_base_bdevs_discovered": 4, 00:14:43.563 "num_base_bdevs_operational": 4, 00:14:43.563 "base_bdevs_list": [ 00:14:43.563 { 00:14:43.563 "name": "BaseBdev1", 00:14:43.563 "uuid": "f166e2e4-4b9a-5c21-b23c-e31ea3816ce8", 00:14:43.563 "is_configured": true, 00:14:43.563 "data_offset": 2048, 00:14:43.563 "data_size": 63488 00:14:43.563 }, 00:14:43.563 { 00:14:43.563 "name": "BaseBdev2", 00:14:43.563 "uuid": "39a2fdec-881f-52af-b97f-eac23a726596", 00:14:43.563 "is_configured": true, 00:14:43.563 "data_offset": 2048, 00:14:43.563 "data_size": 63488 00:14:43.563 }, 00:14:43.563 { 00:14:43.563 "name": "BaseBdev3", 00:14:43.563 "uuid": "1b19fa1f-9546-5bbf-931b-87452652d90a", 00:14:43.563 "is_configured": true, 00:14:43.563 "data_offset": 2048, 00:14:43.563 "data_size": 63488 00:14:43.563 }, 00:14:43.563 { 00:14:43.563 "name": "BaseBdev4", 00:14:43.563 "uuid": "152b04fe-3765-5c6b-bf29-26a1f13e10ee", 00:14:43.563 "is_configured": true, 00:14:43.563 "data_offset": 2048, 00:14:43.563 "data_size": 63488 00:14:43.563 } 00:14:43.563 ] 00:14:43.563 }' 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.563 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.129 [2024-11-27 14:14:14.400985] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.129 [2024-11-27 14:14:14.401057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.129 [2024-11-27 14:14:14.405263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.129 [2024-11-27 14:14:14.405568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.129 [2024-11-27 14:14:14.405906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.129 [2024-11-27 14:14:14.406095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:44.129 { 00:14:44.129 "results": [ 00:14:44.129 { 00:14:44.129 "job": "raid_bdev1", 00:14:44.129 "core_mask": "0x1", 00:14:44.129 "workload": "randrw", 00:14:44.129 "percentage": 50, 00:14:44.129 "status": "finished", 00:14:44.129 "queue_depth": 1, 00:14:44.129 "io_size": 131072, 00:14:44.129 "runtime": 1.420398, 00:14:44.129 "iops": 9115.050851944314, 00:14:44.129 "mibps": 1139.3813564930392, 00:14:44.129 "io_failed": 1, 00:14:44.129 "io_timeout": 0, 00:14:44.129 "avg_latency_us": 154.63550874827985, 00:14:44.129 "min_latency_us": 39.33090909090909, 00:14:44.129 "max_latency_us": 1861.8181818181818 00:14:44.129 } 00:14:44.129 ], 00:14:44.129 "core_count": 1 00:14:44.129 } 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71287 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71287 ']' 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71287 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71287 00:14:44.129 killing process with pid 71287 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71287' 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71287 00:14:44.129 [2024-11-27 14:14:14.444046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.129 14:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71287 00:14:44.388 [2024-11-27 14:14:14.763909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.765 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:45.765 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tNR5Dj7inB 00:14:45.765 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:45.765 ************************************ 00:14:45.765 END TEST raid_read_error_test 00:14:45.765 ************************************ 00:14:45.765 00:14:45.765 real 0m5.057s 
00:14:45.765 user 0m6.127s 00:14:45.765 sys 0m0.675s 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.765 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.765 14:14:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:45.765 14:14:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:45.765 14:14:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.765 14:14:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.765 ************************************ 00:14:45.765 START TEST raid_write_error_test 00:14:45.765 ************************************ 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IaPIReCyK0 00:14:45.765 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71433 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71433 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71433 ']' 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.765 14:14:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.765 [2024-11-27 14:14:16.184783] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:14:45.765 [2024-11-27 14:14:16.184973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:14:46.023 [2024-11-27 14:14:16.372604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.282 [2024-11-27 14:14:16.556661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.541 [2024-11-27 14:14:16.801828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.541 [2024-11-27 14:14:16.801944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.800 BaseBdev1_malloc 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.800 true 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.800 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.800 [2024-11-27 14:14:17.228857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:46.800 [2024-11-27 14:14:17.228961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.801 [2024-11-27 14:14:17.228991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:46.801 [2024-11-27 14:14:17.229015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.801 [2024-11-27 14:14:17.232476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.801 [2024-11-27 14:14:17.232677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:46.801 BaseBdev1 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.801 BaseBdev2_malloc 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:46.801 14:14:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.801 true 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.801 [2024-11-27 14:14:17.298490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:46.801 [2024-11-27 14:14:17.298578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.801 [2024-11-27 14:14:17.298602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:46.801 [2024-11-27 14:14:17.298618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.801 [2024-11-27 14:14:17.301545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.801 [2024-11-27 14:14:17.302037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:46.801 BaseBdev2 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.801 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:47.061 BaseBdev3_malloc 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 true 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 [2024-11-27 14:14:17.373792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:47.061 [2024-11-27 14:14:17.373905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.061 [2024-11-27 14:14:17.373932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:47.061 [2024-11-27 14:14:17.373950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.061 [2024-11-27 14:14:17.377109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.061 [2024-11-27 14:14:17.377157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:47.061 BaseBdev3 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 BaseBdev4_malloc 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 true 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 [2024-11-27 14:14:17.436341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:47.061 [2024-11-27 14:14:17.436491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.061 [2024-11-27 14:14:17.436519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:47.061 [2024-11-27 14:14:17.436546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.061 [2024-11-27 14:14:17.439939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.061 [2024-11-27 14:14:17.439983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:47.061 BaseBdev4 
00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 [2024-11-27 14:14:17.444653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.061 [2024-11-27 14:14:17.447453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.061 [2024-11-27 14:14:17.447559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.061 [2024-11-27 14:14:17.447657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.061 [2024-11-27 14:14:17.448068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:47.061 [2024-11-27 14:14:17.448094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:47.061 [2024-11-27 14:14:17.448450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:47.061 [2024-11-27 14:14:17.448686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:47.061 [2024-11-27 14:14:17.448705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:47.061 [2024-11-27 14:14:17.448988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.061 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.061 "name": "raid_bdev1", 00:14:47.061 "uuid": "00d672bb-6b8b-4f5d-a961-a40518f5852d", 00:14:47.061 "strip_size_kb": 64, 00:14:47.061 "state": "online", 00:14:47.061 "raid_level": "raid0", 00:14:47.061 "superblock": true, 00:14:47.061 "num_base_bdevs": 4, 00:14:47.061 "num_base_bdevs_discovered": 4, 00:14:47.061 
"num_base_bdevs_operational": 4, 00:14:47.061 "base_bdevs_list": [ 00:14:47.061 { 00:14:47.061 "name": "BaseBdev1", 00:14:47.061 "uuid": "11144250-ec7c-5eab-a3b5-bdefcdf417d5", 00:14:47.061 "is_configured": true, 00:14:47.061 "data_offset": 2048, 00:14:47.061 "data_size": 63488 00:14:47.061 }, 00:14:47.061 { 00:14:47.061 "name": "BaseBdev2", 00:14:47.061 "uuid": "052c1add-05d9-5a9d-96a3-cf75b52d206f", 00:14:47.061 "is_configured": true, 00:14:47.061 "data_offset": 2048, 00:14:47.061 "data_size": 63488 00:14:47.061 }, 00:14:47.061 { 00:14:47.061 "name": "BaseBdev3", 00:14:47.062 "uuid": "592fdb41-15ae-53bb-abf8-02c2324a6883", 00:14:47.062 "is_configured": true, 00:14:47.062 "data_offset": 2048, 00:14:47.062 "data_size": 63488 00:14:47.062 }, 00:14:47.062 { 00:14:47.062 "name": "BaseBdev4", 00:14:47.062 "uuid": "20b2579f-4124-526e-b111-cfdfcc2300aa", 00:14:47.062 "is_configured": true, 00:14:47.062 "data_offset": 2048, 00:14:47.062 "data_size": 63488 00:14:47.062 } 00:14:47.062 ] 00:14:47.062 }' 00:14:47.062 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.062 14:14:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.632 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:47.632 14:14:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:47.632 [2024-11-27 14:14:18.074780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.592 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.593 14:14:19 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.593 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.593 "name": "raid_bdev1", 00:14:48.593 "uuid": "00d672bb-6b8b-4f5d-a961-a40518f5852d", 00:14:48.593 "strip_size_kb": 64, 00:14:48.593 "state": "online", 00:14:48.593 "raid_level": "raid0", 00:14:48.593 "superblock": true, 00:14:48.593 "num_base_bdevs": 4, 00:14:48.593 "num_base_bdevs_discovered": 4, 00:14:48.593 "num_base_bdevs_operational": 4, 00:14:48.593 "base_bdevs_list": [ 00:14:48.593 { 00:14:48.593 "name": "BaseBdev1", 00:14:48.593 "uuid": "11144250-ec7c-5eab-a3b5-bdefcdf417d5", 00:14:48.593 "is_configured": true, 00:14:48.593 "data_offset": 2048, 00:14:48.593 "data_size": 63488 00:14:48.593 }, 00:14:48.593 { 00:14:48.593 "name": "BaseBdev2", 00:14:48.593 "uuid": "052c1add-05d9-5a9d-96a3-cf75b52d206f", 00:14:48.593 "is_configured": true, 00:14:48.593 "data_offset": 2048, 00:14:48.593 "data_size": 63488 00:14:48.593 }, 00:14:48.593 { 00:14:48.593 "name": "BaseBdev3", 00:14:48.593 "uuid": "592fdb41-15ae-53bb-abf8-02c2324a6883", 00:14:48.593 "is_configured": true, 00:14:48.593 "data_offset": 2048, 00:14:48.593 "data_size": 63488 00:14:48.593 }, 00:14:48.593 { 00:14:48.593 "name": "BaseBdev4", 00:14:48.593 "uuid": "20b2579f-4124-526e-b111-cfdfcc2300aa", 00:14:48.593 "is_configured": true, 00:14:48.593 "data_offset": 2048, 00:14:48.593 "data_size": 63488 00:14:48.593 } 00:14:48.593 ] 00:14:48.593 }' 00:14:48.593 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.593 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:49.160 [2024-11-27 14:14:19.515025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.160 [2024-11-27 14:14:19.515201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.160 [2024-11-27 14:14:19.518980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.160 [2024-11-27 14:14:19.519253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.160 [2024-11-27 14:14:19.519443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.160 [2024-11-27 14:14:19.519606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:49.160 { 00:14:49.160 "results": [ 00:14:49.160 { 00:14:49.160 "job": "raid_bdev1", 00:14:49.160 "core_mask": "0x1", 00:14:49.160 "workload": "randrw", 00:14:49.160 "percentage": 50, 00:14:49.160 "status": "finished", 00:14:49.160 "queue_depth": 1, 00:14:49.160 "io_size": 131072, 00:14:49.160 "runtime": 1.43777, 00:14:49.160 "iops": 9130.806735430562, 00:14:49.160 "mibps": 1141.3508419288203, 00:14:49.160 "io_failed": 1, 00:14:49.160 "io_timeout": 0, 00:14:49.160 "avg_latency_us": 153.68185072601253, 00:14:49.160 "min_latency_us": 39.79636363636364, 00:14:49.160 "max_latency_us": 2383.1272727272726 00:14:49.160 } 00:14:49.160 ], 00:14:49.160 "core_count": 1 00:14:49.160 } 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71433 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71433 ']' 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71433 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71433 00:14:49.160 killing process with pid 71433 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71433' 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71433 00:14:49.160 [2024-11-27 14:14:19.559662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.160 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71433 00:14:49.418 [2024-11-27 14:14:19.866259] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IaPIReCyK0 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:50.796 ************************************ 00:14:50.796 END TEST raid_write_error_test 00:14:50.796 ************************************ 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.70 != \0\.\0\0 ]] 00:14:50.796 00:14:50.796 real 0m4.941s 00:14:50.796 user 0m5.988s 00:14:50.796 sys 0m0.673s 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.796 14:14:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.796 14:14:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:50.796 14:14:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:50.796 14:14:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:50.796 14:14:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.796 14:14:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.796 ************************************ 00:14:50.796 START TEST raid_state_function_test 00:14:50.796 ************************************ 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:50.796 Process raid pid: 71582 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71582 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71582' 00:14:50.796 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71582 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71582 ']' 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.797 14:14:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.797 [2024-11-27 14:14:21.166904] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:14:50.797 [2024-11-27 14:14:21.167329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.055 [2024-11-27 14:14:21.358037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.055 [2024-11-27 14:14:21.510151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.314 [2024-11-27 14:14:21.723445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.314 [2024-11-27 14:14:21.723482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.881 [2024-11-27 14:14:22.140810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.881 [2024-11-27 14:14:22.140887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.881 [2024-11-27 14:14:22.140905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.881 [2024-11-27 14:14:22.140922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.881 [2024-11-27 14:14:22.140939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:51.881 [2024-11-27 14:14:22.140953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.881 [2024-11-27 14:14:22.140962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.881 [2024-11-27 14:14:22.140976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.881 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.882 "name": "Existed_Raid", 00:14:51.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.882 "strip_size_kb": 64, 00:14:51.882 "state": "configuring", 00:14:51.882 "raid_level": "concat", 00:14:51.882 "superblock": false, 00:14:51.882 "num_base_bdevs": 4, 00:14:51.882 "num_base_bdevs_discovered": 0, 00:14:51.882 "num_base_bdevs_operational": 4, 00:14:51.882 "base_bdevs_list": [ 00:14:51.882 { 00:14:51.882 "name": "BaseBdev1", 00:14:51.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.882 "is_configured": false, 00:14:51.882 "data_offset": 0, 00:14:51.882 "data_size": 0 00:14:51.882 }, 00:14:51.882 { 00:14:51.882 "name": "BaseBdev2", 00:14:51.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.882 "is_configured": false, 00:14:51.882 "data_offset": 0, 00:14:51.882 "data_size": 0 00:14:51.882 }, 00:14:51.882 { 00:14:51.882 "name": "BaseBdev3", 00:14:51.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.882 "is_configured": false, 00:14:51.882 "data_offset": 0, 00:14:51.882 "data_size": 0 00:14:51.882 }, 00:14:51.882 { 00:14:51.882 "name": "BaseBdev4", 00:14:51.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.882 "is_configured": false, 00:14:51.882 "data_offset": 0, 00:14:51.882 "data_size": 0 00:14:51.882 } 00:14:51.882 ] 00:14:51.882 }' 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.882 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.141 [2024-11-27 14:14:22.632957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.141 [2024-11-27 14:14:22.633002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.141 [2024-11-27 14:14:22.640882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.141 [2024-11-27 14:14:22.641080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.141 [2024-11-27 14:14:22.641107] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.141 [2024-11-27 14:14:22.641124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.141 [2024-11-27 14:14:22.641134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.141 [2024-11-27 14:14:22.641148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.141 [2024-11-27 14:14:22.641158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.141 [2024-11-27 14:14:22.641171] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.141 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.400 [2024-11-27 14:14:22.687178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.400 BaseBdev1 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.400 [ 00:14:52.400 { 00:14:52.400 "name": "BaseBdev1", 00:14:52.400 "aliases": [ 00:14:52.400 "c629a154-d2b1-4190-99b2-ef49cfd95ec8" 00:14:52.400 ], 00:14:52.400 "product_name": "Malloc disk", 00:14:52.400 "block_size": 512, 00:14:52.400 "num_blocks": 65536, 00:14:52.400 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:52.400 "assigned_rate_limits": { 00:14:52.400 "rw_ios_per_sec": 0, 00:14:52.400 "rw_mbytes_per_sec": 0, 00:14:52.400 "r_mbytes_per_sec": 0, 00:14:52.400 "w_mbytes_per_sec": 0 00:14:52.400 }, 00:14:52.400 "claimed": true, 00:14:52.400 "claim_type": "exclusive_write", 00:14:52.400 "zoned": false, 00:14:52.400 "supported_io_types": { 00:14:52.400 "read": true, 00:14:52.400 "write": true, 00:14:52.400 "unmap": true, 00:14:52.400 "flush": true, 00:14:52.400 "reset": true, 00:14:52.400 "nvme_admin": false, 00:14:52.400 "nvme_io": false, 00:14:52.400 "nvme_io_md": false, 00:14:52.400 "write_zeroes": true, 00:14:52.400 "zcopy": true, 00:14:52.400 "get_zone_info": false, 00:14:52.400 "zone_management": false, 00:14:52.400 "zone_append": false, 00:14:52.400 "compare": false, 00:14:52.400 "compare_and_write": false, 00:14:52.400 "abort": true, 00:14:52.400 "seek_hole": false, 00:14:52.400 "seek_data": false, 00:14:52.400 "copy": true, 00:14:52.400 "nvme_iov_md": false 00:14:52.400 }, 00:14:52.400 "memory_domains": [ 00:14:52.400 { 00:14:52.400 "dma_device_id": "system", 00:14:52.400 "dma_device_type": 1 00:14:52.400 }, 00:14:52.400 { 00:14:52.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.400 "dma_device_type": 2 00:14:52.400 } 00:14:52.400 ], 00:14:52.400 "driver_specific": {} 00:14:52.400 } 00:14:52.400 ] 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.400 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.400 "name": "Existed_Raid", 
00:14:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.400 "strip_size_kb": 64, 00:14:52.400 "state": "configuring", 00:14:52.400 "raid_level": "concat", 00:14:52.400 "superblock": false, 00:14:52.400 "num_base_bdevs": 4, 00:14:52.400 "num_base_bdevs_discovered": 1, 00:14:52.400 "num_base_bdevs_operational": 4, 00:14:52.400 "base_bdevs_list": [ 00:14:52.400 { 00:14:52.400 "name": "BaseBdev1", 00:14:52.400 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:52.400 "is_configured": true, 00:14:52.400 "data_offset": 0, 00:14:52.400 "data_size": 65536 00:14:52.400 }, 00:14:52.400 { 00:14:52.400 "name": "BaseBdev2", 00:14:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.400 "is_configured": false, 00:14:52.400 "data_offset": 0, 00:14:52.400 "data_size": 0 00:14:52.400 }, 00:14:52.400 { 00:14:52.400 "name": "BaseBdev3", 00:14:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.400 "is_configured": false, 00:14:52.400 "data_offset": 0, 00:14:52.400 "data_size": 0 00:14:52.400 }, 00:14:52.400 { 00:14:52.400 "name": "BaseBdev4", 00:14:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.400 "is_configured": false, 00:14:52.400 "data_offset": 0, 00:14:52.400 "data_size": 0 00:14:52.400 } 00:14:52.400 ] 00:14:52.400 }' 00:14:52.401 14:14:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.401 14:14:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.969 [2024-11-27 14:14:23.223397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.969 [2024-11-27 14:14:23.223473] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.969 [2024-11-27 14:14:23.231415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.969 [2024-11-27 14:14:23.234098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.969 [2024-11-27 14:14:23.234268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.969 [2024-11-27 14:14:23.234395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.969 [2024-11-27 14:14:23.234457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.969 [2024-11-27 14:14:23.234676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.969 [2024-11-27 14:14:23.234744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.969 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.969 "name": "Existed_Raid", 00:14:52.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.969 "strip_size_kb": 64, 00:14:52.969 "state": "configuring", 00:14:52.969 "raid_level": "concat", 00:14:52.969 "superblock": false, 00:14:52.969 "num_base_bdevs": 4, 00:14:52.969 
"num_base_bdevs_discovered": 1, 00:14:52.970 "num_base_bdevs_operational": 4, 00:14:52.970 "base_bdevs_list": [ 00:14:52.970 { 00:14:52.970 "name": "BaseBdev1", 00:14:52.970 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:52.970 "is_configured": true, 00:14:52.970 "data_offset": 0, 00:14:52.970 "data_size": 65536 00:14:52.970 }, 00:14:52.970 { 00:14:52.970 "name": "BaseBdev2", 00:14:52.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.970 "is_configured": false, 00:14:52.970 "data_offset": 0, 00:14:52.970 "data_size": 0 00:14:52.970 }, 00:14:52.970 { 00:14:52.970 "name": "BaseBdev3", 00:14:52.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.970 "is_configured": false, 00:14:52.970 "data_offset": 0, 00:14:52.970 "data_size": 0 00:14:52.970 }, 00:14:52.970 { 00:14:52.970 "name": "BaseBdev4", 00:14:52.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.970 "is_configured": false, 00:14:52.970 "data_offset": 0, 00:14:52.970 "data_size": 0 00:14:52.970 } 00:14:52.970 ] 00:14:52.970 }' 00:14:52.970 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.970 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.551 [2024-11-27 14:14:23.803038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.551 BaseBdev2 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:53.551 14:14:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.551 [ 00:14:53.551 { 00:14:53.551 "name": "BaseBdev2", 00:14:53.551 "aliases": [ 00:14:53.551 "30592943-8b9a-4c32-bcd0-663e415c0a97" 00:14:53.551 ], 00:14:53.551 "product_name": "Malloc disk", 00:14:53.551 "block_size": 512, 00:14:53.551 "num_blocks": 65536, 00:14:53.551 "uuid": "30592943-8b9a-4c32-bcd0-663e415c0a97", 00:14:53.551 "assigned_rate_limits": { 00:14:53.551 "rw_ios_per_sec": 0, 00:14:53.551 "rw_mbytes_per_sec": 0, 00:14:53.551 "r_mbytes_per_sec": 0, 00:14:53.551 "w_mbytes_per_sec": 0 00:14:53.551 }, 00:14:53.551 "claimed": true, 00:14:53.551 "claim_type": "exclusive_write", 00:14:53.551 "zoned": false, 00:14:53.551 "supported_io_types": { 
00:14:53.551 "read": true, 00:14:53.551 "write": true, 00:14:53.551 "unmap": true, 00:14:53.551 "flush": true, 00:14:53.551 "reset": true, 00:14:53.551 "nvme_admin": false, 00:14:53.551 "nvme_io": false, 00:14:53.551 "nvme_io_md": false, 00:14:53.551 "write_zeroes": true, 00:14:53.551 "zcopy": true, 00:14:53.551 "get_zone_info": false, 00:14:53.551 "zone_management": false, 00:14:53.551 "zone_append": false, 00:14:53.551 "compare": false, 00:14:53.551 "compare_and_write": false, 00:14:53.551 "abort": true, 00:14:53.551 "seek_hole": false, 00:14:53.551 "seek_data": false, 00:14:53.551 "copy": true, 00:14:53.551 "nvme_iov_md": false 00:14:53.551 }, 00:14:53.551 "memory_domains": [ 00:14:53.551 { 00:14:53.551 "dma_device_id": "system", 00:14:53.551 "dma_device_type": 1 00:14:53.551 }, 00:14:53.551 { 00:14:53.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.551 "dma_device_type": 2 00:14:53.551 } 00:14:53.551 ], 00:14:53.551 "driver_specific": {} 00:14:53.551 } 00:14:53.551 ] 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.551 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.552 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.552 "name": "Existed_Raid", 00:14:53.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.552 "strip_size_kb": 64, 00:14:53.552 "state": "configuring", 00:14:53.552 "raid_level": "concat", 00:14:53.552 "superblock": false, 00:14:53.552 "num_base_bdevs": 4, 00:14:53.552 "num_base_bdevs_discovered": 2, 00:14:53.552 "num_base_bdevs_operational": 4, 00:14:53.552 "base_bdevs_list": [ 00:14:53.552 { 00:14:53.552 "name": "BaseBdev1", 00:14:53.552 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev2", 00:14:53.552 "uuid": "30592943-8b9a-4c32-bcd0-663e415c0a97", 00:14:53.552 
"is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev3", 00:14:53.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.552 "is_configured": false, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 0 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev4", 00:14:53.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.552 "is_configured": false, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 0 00:14:53.552 } 00:14:53.552 ] 00:14:53.552 }' 00:14:53.552 14:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.552 14:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.118 [2024-11-27 14:14:24.400258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.118 BaseBdev3 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.118 [ 00:14:54.118 { 00:14:54.118 "name": "BaseBdev3", 00:14:54.118 "aliases": [ 00:14:54.118 "e33fc647-9379-440f-9cbb-6ca2e26cc6f8" 00:14:54.118 ], 00:14:54.118 "product_name": "Malloc disk", 00:14:54.118 "block_size": 512, 00:14:54.118 "num_blocks": 65536, 00:14:54.118 "uuid": "e33fc647-9379-440f-9cbb-6ca2e26cc6f8", 00:14:54.118 "assigned_rate_limits": { 00:14:54.118 "rw_ios_per_sec": 0, 00:14:54.118 "rw_mbytes_per_sec": 0, 00:14:54.118 "r_mbytes_per_sec": 0, 00:14:54.118 "w_mbytes_per_sec": 0 00:14:54.118 }, 00:14:54.118 "claimed": true, 00:14:54.118 "claim_type": "exclusive_write", 00:14:54.118 "zoned": false, 00:14:54.118 "supported_io_types": { 00:14:54.118 "read": true, 00:14:54.118 "write": true, 00:14:54.118 "unmap": true, 00:14:54.118 "flush": true, 00:14:54.118 "reset": true, 00:14:54.118 "nvme_admin": false, 00:14:54.118 "nvme_io": false, 00:14:54.118 "nvme_io_md": false, 00:14:54.118 "write_zeroes": true, 00:14:54.118 "zcopy": true, 00:14:54.118 "get_zone_info": false, 00:14:54.118 "zone_management": false, 00:14:54.118 "zone_append": false, 00:14:54.118 "compare": false, 00:14:54.118 "compare_and_write": false, 
00:14:54.118 "abort": true, 00:14:54.118 "seek_hole": false, 00:14:54.118 "seek_data": false, 00:14:54.118 "copy": true, 00:14:54.118 "nvme_iov_md": false 00:14:54.118 }, 00:14:54.118 "memory_domains": [ 00:14:54.118 { 00:14:54.118 "dma_device_id": "system", 00:14:54.118 "dma_device_type": 1 00:14:54.118 }, 00:14:54.118 { 00:14:54.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.118 "dma_device_type": 2 00:14:54.118 } 00:14:54.118 ], 00:14:54.118 "driver_specific": {} 00:14:54.118 } 00:14:54.118 ] 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.118 "name": "Existed_Raid", 00:14:54.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.118 "strip_size_kb": 64, 00:14:54.118 "state": "configuring", 00:14:54.118 "raid_level": "concat", 00:14:54.118 "superblock": false, 00:14:54.118 "num_base_bdevs": 4, 00:14:54.118 "num_base_bdevs_discovered": 3, 00:14:54.118 "num_base_bdevs_operational": 4, 00:14:54.118 "base_bdevs_list": [ 00:14:54.118 { 00:14:54.118 "name": "BaseBdev1", 00:14:54.118 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:54.118 "is_configured": true, 00:14:54.118 "data_offset": 0, 00:14:54.118 "data_size": 65536 00:14:54.118 }, 00:14:54.118 { 00:14:54.118 "name": "BaseBdev2", 00:14:54.118 "uuid": "30592943-8b9a-4c32-bcd0-663e415c0a97", 00:14:54.118 "is_configured": true, 00:14:54.118 "data_offset": 0, 00:14:54.118 "data_size": 65536 00:14:54.118 }, 00:14:54.118 { 00:14:54.118 "name": "BaseBdev3", 00:14:54.118 "uuid": "e33fc647-9379-440f-9cbb-6ca2e26cc6f8", 00:14:54.118 "is_configured": true, 00:14:54.118 "data_offset": 0, 00:14:54.118 "data_size": 65536 00:14:54.118 }, 00:14:54.118 { 00:14:54.118 "name": "BaseBdev4", 00:14:54.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.118 "is_configured": false, 
00:14:54.118 "data_offset": 0, 00:14:54.118 "data_size": 0 00:14:54.118 } 00:14:54.118 ] 00:14:54.118 }' 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.118 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.685 14:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:54.685 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.685 14:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.685 [2024-11-27 14:14:25.000579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.685 [2024-11-27 14:14:25.000633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:54.685 [2024-11-27 14:14:25.000645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:54.685 [2024-11-27 14:14:25.001035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:54.685 [2024-11-27 14:14:25.001294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:54.685 [2024-11-27 14:14:25.001334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:54.685 [2024-11-27 14:14:25.001675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.685 BaseBdev4 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.685 [ 00:14:54.685 { 00:14:54.685 "name": "BaseBdev4", 00:14:54.685 "aliases": [ 00:14:54.685 "1cec7d7f-9049-4043-9044-9f3a2785f344" 00:14:54.685 ], 00:14:54.685 "product_name": "Malloc disk", 00:14:54.685 "block_size": 512, 00:14:54.685 "num_blocks": 65536, 00:14:54.685 "uuid": "1cec7d7f-9049-4043-9044-9f3a2785f344", 00:14:54.685 "assigned_rate_limits": { 00:14:54.685 "rw_ios_per_sec": 0, 00:14:54.685 "rw_mbytes_per_sec": 0, 00:14:54.685 "r_mbytes_per_sec": 0, 00:14:54.685 "w_mbytes_per_sec": 0 00:14:54.685 }, 00:14:54.685 "claimed": true, 00:14:54.685 "claim_type": "exclusive_write", 00:14:54.685 "zoned": false, 00:14:54.685 "supported_io_types": { 00:14:54.685 "read": true, 00:14:54.685 "write": true, 00:14:54.685 "unmap": true, 00:14:54.685 "flush": true, 00:14:54.685 "reset": true, 00:14:54.685 
"nvme_admin": false, 00:14:54.685 "nvme_io": false, 00:14:54.685 "nvme_io_md": false, 00:14:54.685 "write_zeroes": true, 00:14:54.685 "zcopy": true, 00:14:54.685 "get_zone_info": false, 00:14:54.685 "zone_management": false, 00:14:54.685 "zone_append": false, 00:14:54.685 "compare": false, 00:14:54.685 "compare_and_write": false, 00:14:54.685 "abort": true, 00:14:54.685 "seek_hole": false, 00:14:54.685 "seek_data": false, 00:14:54.685 "copy": true, 00:14:54.685 "nvme_iov_md": false 00:14:54.685 }, 00:14:54.685 "memory_domains": [ 00:14:54.685 { 00:14:54.685 "dma_device_id": "system", 00:14:54.685 "dma_device_type": 1 00:14:54.685 }, 00:14:54.685 { 00:14:54.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.685 "dma_device_type": 2 00:14:54.685 } 00:14:54.685 ], 00:14:54.685 "driver_specific": {} 00:14:54.685 } 00:14:54.685 ] 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.685 
14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.685 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.685 "name": "Existed_Raid", 00:14:54.685 "uuid": "8c167fca-9bd2-4dc4-b973-c7f1ce3acf59", 00:14:54.685 "strip_size_kb": 64, 00:14:54.685 "state": "online", 00:14:54.685 "raid_level": "concat", 00:14:54.685 "superblock": false, 00:14:54.685 "num_base_bdevs": 4, 00:14:54.685 "num_base_bdevs_discovered": 4, 00:14:54.685 "num_base_bdevs_operational": 4, 00:14:54.685 "base_bdevs_list": [ 00:14:54.685 { 00:14:54.685 "name": "BaseBdev1", 00:14:54.685 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:54.685 "is_configured": true, 00:14:54.685 "data_offset": 0, 00:14:54.685 "data_size": 65536 00:14:54.685 }, 00:14:54.685 { 00:14:54.685 "name": "BaseBdev2", 00:14:54.685 "uuid": "30592943-8b9a-4c32-bcd0-663e415c0a97", 00:14:54.685 "is_configured": true, 00:14:54.685 "data_offset": 0, 00:14:54.685 "data_size": 65536 00:14:54.685 }, 00:14:54.685 { 00:14:54.685 "name": "BaseBdev3", 
00:14:54.685 "uuid": "e33fc647-9379-440f-9cbb-6ca2e26cc6f8", 00:14:54.685 "is_configured": true, 00:14:54.685 "data_offset": 0, 00:14:54.685 "data_size": 65536 00:14:54.685 }, 00:14:54.685 { 00:14:54.685 "name": "BaseBdev4", 00:14:54.685 "uuid": "1cec7d7f-9049-4043-9044-9f3a2785f344", 00:14:54.685 "is_configured": true, 00:14:54.685 "data_offset": 0, 00:14:54.685 "data_size": 65536 00:14:54.685 } 00:14:54.686 ] 00:14:54.686 }' 00:14:54.686 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.686 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.253 [2024-11-27 14:14:25.529308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.253 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.253 
14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.254 "name": "Existed_Raid", 00:14:55.254 "aliases": [ 00:14:55.254 "8c167fca-9bd2-4dc4-b973-c7f1ce3acf59" 00:14:55.254 ], 00:14:55.254 "product_name": "Raid Volume", 00:14:55.254 "block_size": 512, 00:14:55.254 "num_blocks": 262144, 00:14:55.254 "uuid": "8c167fca-9bd2-4dc4-b973-c7f1ce3acf59", 00:14:55.254 "assigned_rate_limits": { 00:14:55.254 "rw_ios_per_sec": 0, 00:14:55.254 "rw_mbytes_per_sec": 0, 00:14:55.254 "r_mbytes_per_sec": 0, 00:14:55.254 "w_mbytes_per_sec": 0 00:14:55.254 }, 00:14:55.254 "claimed": false, 00:14:55.254 "zoned": false, 00:14:55.254 "supported_io_types": { 00:14:55.254 "read": true, 00:14:55.254 "write": true, 00:14:55.254 "unmap": true, 00:14:55.254 "flush": true, 00:14:55.254 "reset": true, 00:14:55.254 "nvme_admin": false, 00:14:55.254 "nvme_io": false, 00:14:55.254 "nvme_io_md": false, 00:14:55.254 "write_zeroes": true, 00:14:55.254 "zcopy": false, 00:14:55.254 "get_zone_info": false, 00:14:55.254 "zone_management": false, 00:14:55.254 "zone_append": false, 00:14:55.254 "compare": false, 00:14:55.254 "compare_and_write": false, 00:14:55.254 "abort": false, 00:14:55.254 "seek_hole": false, 00:14:55.254 "seek_data": false, 00:14:55.254 "copy": false, 00:14:55.254 "nvme_iov_md": false 00:14:55.254 }, 00:14:55.254 "memory_domains": [ 00:14:55.254 { 00:14:55.254 "dma_device_id": "system", 00:14:55.254 "dma_device_type": 1 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.254 "dma_device_type": 2 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": "system", 00:14:55.254 "dma_device_type": 1 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.254 "dma_device_type": 2 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": "system", 00:14:55.254 "dma_device_type": 1 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:55.254 "dma_device_type": 2 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": "system", 00:14:55.254 "dma_device_type": 1 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.254 "dma_device_type": 2 00:14:55.254 } 00:14:55.254 ], 00:14:55.254 "driver_specific": { 00:14:55.254 "raid": { 00:14:55.254 "uuid": "8c167fca-9bd2-4dc4-b973-c7f1ce3acf59", 00:14:55.254 "strip_size_kb": 64, 00:14:55.254 "state": "online", 00:14:55.254 "raid_level": "concat", 00:14:55.254 "superblock": false, 00:14:55.254 "num_base_bdevs": 4, 00:14:55.254 "num_base_bdevs_discovered": 4, 00:14:55.254 "num_base_bdevs_operational": 4, 00:14:55.254 "base_bdevs_list": [ 00:14:55.254 { 00:14:55.254 "name": "BaseBdev1", 00:14:55.254 "uuid": "c629a154-d2b1-4190-99b2-ef49cfd95ec8", 00:14:55.254 "is_configured": true, 00:14:55.254 "data_offset": 0, 00:14:55.254 "data_size": 65536 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "name": "BaseBdev2", 00:14:55.254 "uuid": "30592943-8b9a-4c32-bcd0-663e415c0a97", 00:14:55.254 "is_configured": true, 00:14:55.254 "data_offset": 0, 00:14:55.254 "data_size": 65536 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "name": "BaseBdev3", 00:14:55.254 "uuid": "e33fc647-9379-440f-9cbb-6ca2e26cc6f8", 00:14:55.254 "is_configured": true, 00:14:55.254 "data_offset": 0, 00:14:55.254 "data_size": 65536 00:14:55.254 }, 00:14:55.254 { 00:14:55.254 "name": "BaseBdev4", 00:14:55.254 "uuid": "1cec7d7f-9049-4043-9044-9f3a2785f344", 00:14:55.254 "is_configured": true, 00:14:55.254 "data_offset": 0, 00:14:55.254 "data_size": 65536 00:14:55.254 } 00:14:55.254 ] 00:14:55.254 } 00:14:55.254 } 00:14:55.254 }' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:55.254 BaseBdev2 
00:14:55.254 BaseBdev3 00:14:55.254 BaseBdev4' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.254 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.514 14:14:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.514 14:14:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 [2024-11-27 14:14:25.909128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.514 [2024-11-27 14:14:25.909535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.514 [2024-11-27 14:14:25.909725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.514 14:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.514 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.773 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.773 "name": "Existed_Raid", 00:14:55.773 "uuid": "8c167fca-9bd2-4dc4-b973-c7f1ce3acf59", 00:14:55.773 "strip_size_kb": 64, 00:14:55.773 "state": "offline", 00:14:55.773 "raid_level": "concat", 00:14:55.773 "superblock": false, 00:14:55.773 "num_base_bdevs": 4, 00:14:55.773 "num_base_bdevs_discovered": 3, 00:14:55.773 "num_base_bdevs_operational": 3, 00:14:55.773 "base_bdevs_list": [ 00:14:55.774 { 00:14:55.774 "name": null, 00:14:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.774 "is_configured": false, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 65536 00:14:55.774 }, 00:14:55.774 { 00:14:55.774 "name": "BaseBdev2", 00:14:55.774 "uuid": "30592943-8b9a-4c32-bcd0-663e415c0a97", 00:14:55.774 "is_configured": 
true, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 65536 00:14:55.774 }, 00:14:55.774 { 00:14:55.774 "name": "BaseBdev3", 00:14:55.774 "uuid": "e33fc647-9379-440f-9cbb-6ca2e26cc6f8", 00:14:55.774 "is_configured": true, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 65536 00:14:55.774 }, 00:14:55.774 { 00:14:55.774 "name": "BaseBdev4", 00:14:55.774 "uuid": "1cec7d7f-9049-4043-9044-9f3a2785f344", 00:14:55.774 "is_configured": true, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 65536 00:14:55.774 } 00:14:55.774 ] 00:14:55.774 }' 00:14:55.774 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.774 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.033 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:56.033 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 [2024-11-27 14:14:26.604551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.291 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 [2024-11-27 14:14:26.755590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.550 14:14:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.550 [2024-11-27 14:14:26.906093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:56.550 [2024-11-27 14:14:26.906187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.550 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.550 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.810 BaseBdev2 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.810 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.810 [ 00:14:56.810 { 00:14:56.810 "name": "BaseBdev2", 00:14:56.810 "aliases": [ 00:14:56.810 "b10d8d73-8864-4c1b-a611-24c17125d93f" 00:14:56.810 ], 00:14:56.810 "product_name": "Malloc disk", 00:14:56.810 "block_size": 512, 00:14:56.810 "num_blocks": 65536, 00:14:56.810 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:14:56.810 "assigned_rate_limits": { 00:14:56.810 "rw_ios_per_sec": 0, 00:14:56.810 "rw_mbytes_per_sec": 0, 00:14:56.810 "r_mbytes_per_sec": 0, 00:14:56.810 "w_mbytes_per_sec": 0 00:14:56.810 }, 00:14:56.810 "claimed": false, 00:14:56.810 "zoned": false, 00:14:56.810 "supported_io_types": { 00:14:56.810 "read": true, 00:14:56.810 "write": true, 00:14:56.810 "unmap": true, 00:14:56.810 "flush": true, 00:14:56.810 "reset": true, 00:14:56.810 "nvme_admin": false, 00:14:56.810 "nvme_io": false, 00:14:56.810 "nvme_io_md": false, 00:14:56.810 "write_zeroes": true, 00:14:56.810 "zcopy": true, 00:14:56.810 "get_zone_info": false, 00:14:56.810 "zone_management": false, 00:14:56.810 "zone_append": false, 00:14:56.810 "compare": false, 00:14:56.810 "compare_and_write": false, 00:14:56.810 "abort": true, 00:14:56.810 "seek_hole": false, 00:14:56.810 
"seek_data": false, 00:14:56.810 "copy": true, 00:14:56.810 "nvme_iov_md": false 00:14:56.810 }, 00:14:56.810 "memory_domains": [ 00:14:56.810 { 00:14:56.810 "dma_device_id": "system", 00:14:56.811 "dma_device_type": 1 00:14:56.811 }, 00:14:56.811 { 00:14:56.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.811 "dma_device_type": 2 00:14:56.811 } 00:14:56.811 ], 00:14:56.811 "driver_specific": {} 00:14:56.811 } 00:14:56.811 ] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.811 BaseBdev3 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.811 [ 00:14:56.811 { 00:14:56.811 "name": "BaseBdev3", 00:14:56.811 "aliases": [ 00:14:56.811 "c31810d2-ec39-44c7-999a-02c3c7687309" 00:14:56.811 ], 00:14:56.811 "product_name": "Malloc disk", 00:14:56.811 "block_size": 512, 00:14:56.811 "num_blocks": 65536, 00:14:56.811 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:14:56.811 "assigned_rate_limits": { 00:14:56.811 "rw_ios_per_sec": 0, 00:14:56.811 "rw_mbytes_per_sec": 0, 00:14:56.811 "r_mbytes_per_sec": 0, 00:14:56.811 "w_mbytes_per_sec": 0 00:14:56.811 }, 00:14:56.811 "claimed": false, 00:14:56.811 "zoned": false, 00:14:56.811 "supported_io_types": { 00:14:56.811 "read": true, 00:14:56.811 "write": true, 00:14:56.811 "unmap": true, 00:14:56.811 "flush": true, 00:14:56.811 "reset": true, 00:14:56.811 "nvme_admin": false, 00:14:56.811 "nvme_io": false, 00:14:56.811 "nvme_io_md": false, 00:14:56.811 "write_zeroes": true, 00:14:56.811 "zcopy": true, 00:14:56.811 "get_zone_info": false, 00:14:56.811 "zone_management": false, 00:14:56.811 "zone_append": false, 00:14:56.811 "compare": false, 00:14:56.811 "compare_and_write": false, 00:14:56.811 "abort": true, 00:14:56.811 "seek_hole": false, 00:14:56.811 "seek_data": false, 
00:14:56.811 "copy": true, 00:14:56.811 "nvme_iov_md": false 00:14:56.811 }, 00:14:56.811 "memory_domains": [ 00:14:56.811 { 00:14:56.811 "dma_device_id": "system", 00:14:56.811 "dma_device_type": 1 00:14:56.811 }, 00:14:56.811 { 00:14:56.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.811 "dma_device_type": 2 00:14:56.811 } 00:14:56.811 ], 00:14:56.811 "driver_specific": {} 00:14:56.811 } 00:14:56.811 ] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.811 BaseBdev4 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.811 
14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.811 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.811 [ 00:14:56.811 { 00:14:56.811 "name": "BaseBdev4", 00:14:56.811 "aliases": [ 00:14:56.811 "20700393-9d0b-4771-b656-37849d26b514" 00:14:56.811 ], 00:14:56.811 "product_name": "Malloc disk", 00:14:56.811 "block_size": 512, 00:14:56.811 "num_blocks": 65536, 00:14:56.811 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:14:56.811 "assigned_rate_limits": { 00:14:56.811 "rw_ios_per_sec": 0, 00:14:56.811 "rw_mbytes_per_sec": 0, 00:14:56.811 "r_mbytes_per_sec": 0, 00:14:56.811 "w_mbytes_per_sec": 0 00:14:56.811 }, 00:14:56.811 "claimed": false, 00:14:56.811 "zoned": false, 00:14:56.811 "supported_io_types": { 00:14:56.811 "read": true, 00:14:56.811 "write": true, 00:14:56.811 "unmap": true, 00:14:56.811 "flush": true, 00:14:56.811 "reset": true, 00:14:56.811 "nvme_admin": false, 00:14:56.811 "nvme_io": false, 00:14:56.811 "nvme_io_md": false, 00:14:56.811 "write_zeroes": true, 00:14:56.811 "zcopy": true, 00:14:56.811 "get_zone_info": false, 00:14:56.811 "zone_management": false, 00:14:56.811 "zone_append": false, 00:14:56.811 "compare": false, 00:14:56.811 "compare_and_write": false, 00:14:56.811 "abort": true, 00:14:56.811 "seek_hole": false, 00:14:56.811 "seek_data": false, 00:14:56.811 
"copy": true, 00:14:56.811 "nvme_iov_md": false 00:14:56.811 }, 00:14:56.811 "memory_domains": [ 00:14:56.811 { 00:14:56.811 "dma_device_id": "system", 00:14:56.812 "dma_device_type": 1 00:14:56.812 }, 00:14:56.812 { 00:14:56.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.812 "dma_device_type": 2 00:14:56.812 } 00:14:56.812 ], 00:14:56.812 "driver_specific": {} 00:14:56.812 } 00:14:56.812 ] 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.812 [2024-11-27 14:14:27.294604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.812 [2024-11-27 14:14:27.294953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.812 [2024-11-27 14:14:27.295096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.812 [2024-11-27 14:14:27.298073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.812 [2024-11-27 14:14:27.298266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.812 14:14:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.812 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.070 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.070 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.070 "name": "Existed_Raid", 00:14:57.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.070 "strip_size_kb": 64, 00:14:57.070 "state": "configuring", 00:14:57.070 
"raid_level": "concat", 00:14:57.070 "superblock": false, 00:14:57.070 "num_base_bdevs": 4, 00:14:57.070 "num_base_bdevs_discovered": 3, 00:14:57.070 "num_base_bdevs_operational": 4, 00:14:57.070 "base_bdevs_list": [ 00:14:57.070 { 00:14:57.070 "name": "BaseBdev1", 00:14:57.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.070 "is_configured": false, 00:14:57.070 "data_offset": 0, 00:14:57.070 "data_size": 0 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "name": "BaseBdev2", 00:14:57.070 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:14:57.070 "is_configured": true, 00:14:57.070 "data_offset": 0, 00:14:57.070 "data_size": 65536 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "name": "BaseBdev3", 00:14:57.070 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:14:57.070 "is_configured": true, 00:14:57.070 "data_offset": 0, 00:14:57.070 "data_size": 65536 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "name": "BaseBdev4", 00:14:57.070 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:14:57.070 "is_configured": true, 00:14:57.070 "data_offset": 0, 00:14:57.070 "data_size": 65536 00:14:57.070 } 00:14:57.070 ] 00:14:57.070 }' 00:14:57.070 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.070 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.328 [2024-11-27 14:14:27.823018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.328 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.600 "name": "Existed_Raid", 00:14:57.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.600 "strip_size_kb": 64, 00:14:57.600 "state": "configuring", 00:14:57.600 "raid_level": "concat", 00:14:57.600 "superblock": false, 
00:14:57.600 "num_base_bdevs": 4, 00:14:57.600 "num_base_bdevs_discovered": 2, 00:14:57.600 "num_base_bdevs_operational": 4, 00:14:57.600 "base_bdevs_list": [ 00:14:57.600 { 00:14:57.600 "name": "BaseBdev1", 00:14:57.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.600 "is_configured": false, 00:14:57.600 "data_offset": 0, 00:14:57.600 "data_size": 0 00:14:57.600 }, 00:14:57.600 { 00:14:57.600 "name": null, 00:14:57.600 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:14:57.600 "is_configured": false, 00:14:57.600 "data_offset": 0, 00:14:57.600 "data_size": 65536 00:14:57.600 }, 00:14:57.600 { 00:14:57.600 "name": "BaseBdev3", 00:14:57.600 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:14:57.600 "is_configured": true, 00:14:57.600 "data_offset": 0, 00:14:57.600 "data_size": 65536 00:14:57.600 }, 00:14:57.600 { 00:14:57.600 "name": "BaseBdev4", 00:14:57.600 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:14:57.600 "is_configured": true, 00:14:57.600 "data_offset": 0, 00:14:57.600 "data_size": 65536 00:14:57.600 } 00:14:57.600 ] 00:14:57.600 }' 00:14:57.600 14:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.600 14:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.876 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:57.876 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.876 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.876 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.876 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.134 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:58.134 14:14:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.134 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.134 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.134 [2024-11-27 14:14:28.449246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.134 BaseBdev1 00:14:58.134 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.135 [ 00:14:58.135 { 00:14:58.135 "name": "BaseBdev1", 00:14:58.135 "aliases": [ 00:14:58.135 "41a25b93-c67e-43f9-af02-1130e96485da" 00:14:58.135 ], 00:14:58.135 "product_name": "Malloc disk", 00:14:58.135 "block_size": 512, 00:14:58.135 "num_blocks": 65536, 00:14:58.135 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:14:58.135 "assigned_rate_limits": { 00:14:58.135 "rw_ios_per_sec": 0, 00:14:58.135 "rw_mbytes_per_sec": 0, 00:14:58.135 "r_mbytes_per_sec": 0, 00:14:58.135 "w_mbytes_per_sec": 0 00:14:58.135 }, 00:14:58.135 "claimed": true, 00:14:58.135 "claim_type": "exclusive_write", 00:14:58.135 "zoned": false, 00:14:58.135 "supported_io_types": { 00:14:58.135 "read": true, 00:14:58.135 "write": true, 00:14:58.135 "unmap": true, 00:14:58.135 "flush": true, 00:14:58.135 "reset": true, 00:14:58.135 "nvme_admin": false, 00:14:58.135 "nvme_io": false, 00:14:58.135 "nvme_io_md": false, 00:14:58.135 "write_zeroes": true, 00:14:58.135 "zcopy": true, 00:14:58.135 "get_zone_info": false, 00:14:58.135 "zone_management": false, 00:14:58.135 "zone_append": false, 00:14:58.135 "compare": false, 00:14:58.135 "compare_and_write": false, 00:14:58.135 "abort": true, 00:14:58.135 "seek_hole": false, 00:14:58.135 "seek_data": false, 00:14:58.135 "copy": true, 00:14:58.135 "nvme_iov_md": false 00:14:58.135 }, 00:14:58.135 "memory_domains": [ 00:14:58.135 { 00:14:58.135 "dma_device_id": "system", 00:14:58.135 "dma_device_type": 1 00:14:58.135 }, 00:14:58.135 { 00:14:58.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.135 "dma_device_type": 2 00:14:58.135 } 00:14:58.135 ], 00:14:58.135 "driver_specific": {} 00:14:58.135 } 00:14:58.135 ] 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.135 "name": "Existed_Raid", 00:14:58.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.135 "strip_size_kb": 64, 00:14:58.135 "state": "configuring", 00:14:58.135 "raid_level": "concat", 00:14:58.135 "superblock": false, 
00:14:58.135 "num_base_bdevs": 4, 00:14:58.135 "num_base_bdevs_discovered": 3, 00:14:58.135 "num_base_bdevs_operational": 4, 00:14:58.135 "base_bdevs_list": [ 00:14:58.135 { 00:14:58.135 "name": "BaseBdev1", 00:14:58.135 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:14:58.135 "is_configured": true, 00:14:58.135 "data_offset": 0, 00:14:58.135 "data_size": 65536 00:14:58.135 }, 00:14:58.135 { 00:14:58.135 "name": null, 00:14:58.135 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:14:58.135 "is_configured": false, 00:14:58.135 "data_offset": 0, 00:14:58.135 "data_size": 65536 00:14:58.135 }, 00:14:58.135 { 00:14:58.135 "name": "BaseBdev3", 00:14:58.135 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:14:58.135 "is_configured": true, 00:14:58.135 "data_offset": 0, 00:14:58.135 "data_size": 65536 00:14:58.135 }, 00:14:58.135 { 00:14:58.135 "name": "BaseBdev4", 00:14:58.135 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:14:58.135 "is_configured": true, 00:14:58.135 "data_offset": 0, 00:14:58.135 "data_size": 65536 00:14:58.135 } 00:14:58.135 ] 00:14:58.135 }' 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.135 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.703 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.703 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.703 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.703 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.703 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:58.703 14:14:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.703 [2024-11-27 14:14:29.033639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.703 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.703 "name": "Existed_Raid", 00:14:58.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.703 "strip_size_kb": 64, 00:14:58.703 "state": "configuring", 00:14:58.703 "raid_level": "concat", 00:14:58.703 "superblock": false, 00:14:58.703 "num_base_bdevs": 4, 00:14:58.703 "num_base_bdevs_discovered": 2, 00:14:58.703 "num_base_bdevs_operational": 4, 00:14:58.703 "base_bdevs_list": [ 00:14:58.703 { 00:14:58.703 "name": "BaseBdev1", 00:14:58.703 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:14:58.703 "is_configured": true, 00:14:58.703 "data_offset": 0, 00:14:58.703 "data_size": 65536 00:14:58.703 }, 00:14:58.703 { 00:14:58.703 "name": null, 00:14:58.703 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:14:58.703 "is_configured": false, 00:14:58.703 "data_offset": 0, 00:14:58.703 "data_size": 65536 00:14:58.703 }, 00:14:58.703 { 00:14:58.703 "name": null, 00:14:58.703 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:14:58.703 "is_configured": false, 00:14:58.703 "data_offset": 0, 00:14:58.703 "data_size": 65536 00:14:58.703 }, 00:14:58.703 { 00:14:58.703 "name": "BaseBdev4", 00:14:58.703 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:14:58.703 "is_configured": true, 00:14:58.703 "data_offset": 0, 00:14:58.703 "data_size": 65536 00:14:58.704 } 00:14:58.704 ] 00:14:58.704 }' 00:14:58.704 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.704 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.268 [2024-11-27 14:14:29.645821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.268 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.269 "name": "Existed_Raid", 00:14:59.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.269 "strip_size_kb": 64, 00:14:59.269 "state": "configuring", 00:14:59.269 "raid_level": "concat", 00:14:59.269 "superblock": false, 00:14:59.269 "num_base_bdevs": 4, 00:14:59.269 "num_base_bdevs_discovered": 3, 00:14:59.269 "num_base_bdevs_operational": 4, 00:14:59.269 "base_bdevs_list": [ 00:14:59.269 { 00:14:59.269 "name": "BaseBdev1", 00:14:59.269 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:14:59.269 "is_configured": true, 00:14:59.269 "data_offset": 0, 00:14:59.269 "data_size": 65536 00:14:59.269 }, 00:14:59.269 { 00:14:59.269 "name": null, 00:14:59.269 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:14:59.269 "is_configured": false, 00:14:59.269 "data_offset": 0, 00:14:59.269 "data_size": 65536 00:14:59.269 }, 00:14:59.269 { 00:14:59.269 "name": "BaseBdev3", 00:14:59.269 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:14:59.269 
"is_configured": true, 00:14:59.269 "data_offset": 0, 00:14:59.269 "data_size": 65536 00:14:59.269 }, 00:14:59.269 { 00:14:59.269 "name": "BaseBdev4", 00:14:59.269 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:14:59.269 "is_configured": true, 00:14:59.269 "data_offset": 0, 00:14:59.269 "data_size": 65536 00:14:59.269 } 00:14:59.269 ] 00:14:59.269 }' 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.269 14:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.835 [2024-11-27 14:14:30.190103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.835 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.092 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.092 "name": "Existed_Raid", 00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.092 "strip_size_kb": 64, 00:15:00.092 "state": "configuring", 00:15:00.092 "raid_level": "concat", 00:15:00.092 "superblock": false, 00:15:00.092 "num_base_bdevs": 4, 00:15:00.092 "num_base_bdevs_discovered": 2, 00:15:00.092 "num_base_bdevs_operational": 4, 
00:15:00.092 "base_bdevs_list": [ 00:15:00.092 { 00:15:00.092 "name": null, 00:15:00.092 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:15:00.092 "is_configured": false, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 65536 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "name": null, 00:15:00.092 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:15:00.092 "is_configured": false, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 65536 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "name": "BaseBdev3", 00:15:00.092 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:15:00.092 "is_configured": true, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 65536 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "name": "BaseBdev4", 00:15:00.092 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:15:00.092 "is_configured": true, 00:15:00.092 "data_offset": 0, 00:15:00.092 "data_size": 65536 00:15:00.092 } 00:15:00.092 ] 00:15:00.092 }' 00:15:00.092 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.092 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:00.350 14:14:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.350 [2024-11-27 14:14:30.852642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.350 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.609 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.609 14:14:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.609 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.609 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.609 "name": "Existed_Raid", 00:15:00.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.609 "strip_size_kb": 64, 00:15:00.609 "state": "configuring", 00:15:00.609 "raid_level": "concat", 00:15:00.609 "superblock": false, 00:15:00.609 "num_base_bdevs": 4, 00:15:00.609 "num_base_bdevs_discovered": 3, 00:15:00.609 "num_base_bdevs_operational": 4, 00:15:00.609 "base_bdevs_list": [ 00:15:00.609 { 00:15:00.609 "name": null, 00:15:00.609 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:15:00.609 "is_configured": false, 00:15:00.609 "data_offset": 0, 00:15:00.609 "data_size": 65536 00:15:00.609 }, 00:15:00.609 { 00:15:00.609 "name": "BaseBdev2", 00:15:00.609 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:15:00.609 "is_configured": true, 00:15:00.609 "data_offset": 0, 00:15:00.609 "data_size": 65536 00:15:00.609 }, 00:15:00.609 { 00:15:00.609 "name": "BaseBdev3", 00:15:00.609 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:15:00.609 "is_configured": true, 00:15:00.609 "data_offset": 0, 00:15:00.609 "data_size": 65536 00:15:00.609 }, 00:15:00.609 { 00:15:00.609 "name": "BaseBdev4", 00:15:00.609 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:15:00.609 "is_configured": true, 00:15:00.609 "data_offset": 0, 00:15:00.609 "data_size": 65536 00:15:00.609 } 00:15:00.609 ] 00:15:00.609 }' 00:15:00.609 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.609 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.867 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.867 14:14:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.867 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.867 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.867 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 41a25b93-c67e-43f9-af02-1130e96485da 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.126 [2024-11-27 14:14:31.508536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:01.126 [2024-11-27 14:14:31.508620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:01.126 [2024-11-27 14:14:31.508634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:01.126 [2024-11-27 14:14:31.509052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:01.126 NewBaseBdev 00:15:01.126 
[2024-11-27 14:14:31.509284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:01.126 [2024-11-27 14:14:31.509314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:01.126 [2024-11-27 14:14:31.509653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.126 [ 00:15:01.126 { 
00:15:01.126 "name": "NewBaseBdev", 00:15:01.126 "aliases": [ 00:15:01.126 "41a25b93-c67e-43f9-af02-1130e96485da" 00:15:01.126 ], 00:15:01.126 "product_name": "Malloc disk", 00:15:01.126 "block_size": 512, 00:15:01.126 "num_blocks": 65536, 00:15:01.126 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:15:01.126 "assigned_rate_limits": { 00:15:01.126 "rw_ios_per_sec": 0, 00:15:01.126 "rw_mbytes_per_sec": 0, 00:15:01.126 "r_mbytes_per_sec": 0, 00:15:01.126 "w_mbytes_per_sec": 0 00:15:01.126 }, 00:15:01.126 "claimed": true, 00:15:01.126 "claim_type": "exclusive_write", 00:15:01.126 "zoned": false, 00:15:01.126 "supported_io_types": { 00:15:01.126 "read": true, 00:15:01.126 "write": true, 00:15:01.126 "unmap": true, 00:15:01.126 "flush": true, 00:15:01.126 "reset": true, 00:15:01.126 "nvme_admin": false, 00:15:01.126 "nvme_io": false, 00:15:01.126 "nvme_io_md": false, 00:15:01.126 "write_zeroes": true, 00:15:01.126 "zcopy": true, 00:15:01.126 "get_zone_info": false, 00:15:01.126 "zone_management": false, 00:15:01.126 "zone_append": false, 00:15:01.126 "compare": false, 00:15:01.126 "compare_and_write": false, 00:15:01.126 "abort": true, 00:15:01.126 "seek_hole": false, 00:15:01.126 "seek_data": false, 00:15:01.126 "copy": true, 00:15:01.126 "nvme_iov_md": false 00:15:01.126 }, 00:15:01.126 "memory_domains": [ 00:15:01.126 { 00:15:01.126 "dma_device_id": "system", 00:15:01.126 "dma_device_type": 1 00:15:01.126 }, 00:15:01.126 { 00:15:01.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.126 "dma_device_type": 2 00:15:01.126 } 00:15:01.126 ], 00:15:01.126 "driver_specific": {} 00:15:01.126 } 00:15:01.126 ] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:01.126 
14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.126 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.127 "name": "Existed_Raid", 00:15:01.127 "uuid": "a78302cd-ead0-45aa-b8bd-e3e1f4bdc1ca", 00:15:01.127 "strip_size_kb": 64, 00:15:01.127 "state": "online", 00:15:01.127 "raid_level": "concat", 00:15:01.127 "superblock": false, 00:15:01.127 "num_base_bdevs": 4, 00:15:01.127 "num_base_bdevs_discovered": 4, 00:15:01.127 
"num_base_bdevs_operational": 4, 00:15:01.127 "base_bdevs_list": [ 00:15:01.127 { 00:15:01.127 "name": "NewBaseBdev", 00:15:01.127 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:15:01.127 "is_configured": true, 00:15:01.127 "data_offset": 0, 00:15:01.127 "data_size": 65536 00:15:01.127 }, 00:15:01.127 { 00:15:01.127 "name": "BaseBdev2", 00:15:01.127 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:15:01.127 "is_configured": true, 00:15:01.127 "data_offset": 0, 00:15:01.127 "data_size": 65536 00:15:01.127 }, 00:15:01.127 { 00:15:01.127 "name": "BaseBdev3", 00:15:01.127 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:15:01.127 "is_configured": true, 00:15:01.127 "data_offset": 0, 00:15:01.127 "data_size": 65536 00:15:01.127 }, 00:15:01.127 { 00:15:01.127 "name": "BaseBdev4", 00:15:01.127 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:15:01.127 "is_configured": true, 00:15:01.127 "data_offset": 0, 00:15:01.127 "data_size": 65536 00:15:01.127 } 00:15:01.127 ] 00:15:01.127 }' 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.127 14:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.747 [2024-11-27 14:14:32.069377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.747 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.747 "name": "Existed_Raid", 00:15:01.747 "aliases": [ 00:15:01.747 "a78302cd-ead0-45aa-b8bd-e3e1f4bdc1ca" 00:15:01.747 ], 00:15:01.747 "product_name": "Raid Volume", 00:15:01.747 "block_size": 512, 00:15:01.747 "num_blocks": 262144, 00:15:01.747 "uuid": "a78302cd-ead0-45aa-b8bd-e3e1f4bdc1ca", 00:15:01.747 "assigned_rate_limits": { 00:15:01.747 "rw_ios_per_sec": 0, 00:15:01.747 "rw_mbytes_per_sec": 0, 00:15:01.747 "r_mbytes_per_sec": 0, 00:15:01.747 "w_mbytes_per_sec": 0 00:15:01.747 }, 00:15:01.747 "claimed": false, 00:15:01.747 "zoned": false, 00:15:01.747 "supported_io_types": { 00:15:01.748 "read": true, 00:15:01.748 "write": true, 00:15:01.748 "unmap": true, 00:15:01.748 "flush": true, 00:15:01.748 "reset": true, 00:15:01.748 "nvme_admin": false, 00:15:01.748 "nvme_io": false, 00:15:01.748 "nvme_io_md": false, 00:15:01.748 "write_zeroes": true, 00:15:01.748 "zcopy": false, 00:15:01.748 "get_zone_info": false, 00:15:01.748 "zone_management": false, 00:15:01.748 "zone_append": false, 00:15:01.748 "compare": false, 00:15:01.748 "compare_and_write": false, 00:15:01.748 "abort": false, 00:15:01.748 "seek_hole": false, 00:15:01.748 "seek_data": false, 00:15:01.748 "copy": false, 00:15:01.748 "nvme_iov_md": false 00:15:01.748 }, 00:15:01.748 "memory_domains": [ 00:15:01.748 { 00:15:01.748 "dma_device_id": "system", 
00:15:01.748 "dma_device_type": 1 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.748 "dma_device_type": 2 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "system", 00:15:01.748 "dma_device_type": 1 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.748 "dma_device_type": 2 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "system", 00:15:01.748 "dma_device_type": 1 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.748 "dma_device_type": 2 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "system", 00:15:01.748 "dma_device_type": 1 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.748 "dma_device_type": 2 00:15:01.748 } 00:15:01.748 ], 00:15:01.748 "driver_specific": { 00:15:01.748 "raid": { 00:15:01.748 "uuid": "a78302cd-ead0-45aa-b8bd-e3e1f4bdc1ca", 00:15:01.748 "strip_size_kb": 64, 00:15:01.748 "state": "online", 00:15:01.748 "raid_level": "concat", 00:15:01.748 "superblock": false, 00:15:01.748 "num_base_bdevs": 4, 00:15:01.748 "num_base_bdevs_discovered": 4, 00:15:01.748 "num_base_bdevs_operational": 4, 00:15:01.748 "base_bdevs_list": [ 00:15:01.748 { 00:15:01.748 "name": "NewBaseBdev", 00:15:01.748 "uuid": "41a25b93-c67e-43f9-af02-1130e96485da", 00:15:01.748 "is_configured": true, 00:15:01.748 "data_offset": 0, 00:15:01.748 "data_size": 65536 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "name": "BaseBdev2", 00:15:01.748 "uuid": "b10d8d73-8864-4c1b-a611-24c17125d93f", 00:15:01.748 "is_configured": true, 00:15:01.748 "data_offset": 0, 00:15:01.748 "data_size": 65536 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "name": "BaseBdev3", 00:15:01.748 "uuid": "c31810d2-ec39-44c7-999a-02c3c7687309", 00:15:01.748 "is_configured": true, 00:15:01.748 "data_offset": 0, 00:15:01.748 "data_size": 65536 00:15:01.748 }, 00:15:01.748 { 00:15:01.748 "name": "BaseBdev4", 
00:15:01.748 "uuid": "20700393-9d0b-4771-b656-37849d26b514", 00:15:01.748 "is_configured": true, 00:15:01.748 "data_offset": 0, 00:15:01.748 "data_size": 65536 00:15:01.748 } 00:15:01.748 ] 00:15:01.748 } 00:15:01.748 } 00:15:01.748 }' 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:01.748 BaseBdev2 00:15:01.748 BaseBdev3 00:15:01.748 BaseBdev4' 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.748 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:02.007 14:14:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.007 [2024-11-27 14:14:32.436962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.007 [2024-11-27 14:14:32.437317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.007 [2024-11-27 14:14:32.437594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.007 [2024-11-27 14:14:32.437827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.007 [2024-11-27 14:14:32.437886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71582 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71582 ']' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71582 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71582 00:15:02.007 killing process with pid 71582 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71582' 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71582 00:15:02.007 [2024-11-27 14:14:32.473035] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.007 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71582 00:15:02.575 [2024-11-27 14:14:32.874082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.510 14:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:03.510 00:15:03.510 real 0m12.957s 00:15:03.510 user 0m21.274s 00:15:03.510 sys 0m1.770s 00:15:03.510 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.510 ************************************ 00:15:03.510 END TEST raid_state_function_test 00:15:03.510 ************************************ 00:15:03.510 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.769 14:14:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:15:03.769 14:14:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:03.769 14:14:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.769 14:14:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.769 ************************************ 00:15:03.769 START TEST raid_state_function_test_sb 00:15:03.769 ************************************ 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:03.769 14:14:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:03.769 Process raid pid: 72263 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72263 00:15:03.769 14:14:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72263' 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72263 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72263 ']' 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.769 14:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.769 [2024-11-27 14:14:34.162108] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
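The trace below repeatedly calls `bdev_raid_get_bdevs all` and filters the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before `verify_raid_bdev_state` compares fields such as `state` and `raid_level` against expectations. As a rough, self-contained sketch of that comparison (the inline JSON is a trimmed stand-in for real RPC output, and the parameter-expansion parsing is an illustration, not the test suite's actual method):

```shell
# Trimmed stand-in for one entry of `bdev_raid_get_bdevs all` output,
# matching the fields dumped in this log (assumed sample data).
info='{"name": "Existed_Raid", "state": "configuring", "raid_level": "concat", "num_base_bdevs_discovered": 0}'

expected_state=configuring

# Extract the "state" value with shell parameter expansion
# (avoids a jq dependency for this sketch).
state=${info#*\"state\": \"}
state=${state%%\"*}

if [ "$state" = "$expected_state" ]; then
  echo "state OK: $state"
else
  echo "state mismatch: got $state, want $expected_state" >&2
fi
```

In the real test, the same kind of check is repeated after each `bdev_malloc_create`/`bdev_raid_create` step, with `num_base_bdevs_discovered` expected to grow from 0 to 4 as base bdevs are claimed.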
00:15:03.769 [2024-11-27 14:14:34.162260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.027 [2024-11-27 14:14:34.334684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.027 [2024-11-27 14:14:34.472750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.285 [2024-11-27 14:14:34.691487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.285 [2024-11-27 14:14:34.691758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.853 [2024-11-27 14:14:35.136152] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.853 [2024-11-27 14:14:35.136380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.853 [2024-11-27 14:14:35.136511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.853 [2024-11-27 14:14:35.136679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.853 [2024-11-27 14:14:35.136797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:04.853 [2024-11-27 14:14:35.136956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.853 [2024-11-27 14:14:35.137110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:04.853 [2024-11-27 14:14:35.137196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.853 
14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.853 "name": "Existed_Raid", 00:15:04.853 "uuid": "b4f245c8-62ac-44ac-9c0d-2097d06ccd9c", 00:15:04.853 "strip_size_kb": 64, 00:15:04.853 "state": "configuring", 00:15:04.853 "raid_level": "concat", 00:15:04.853 "superblock": true, 00:15:04.853 "num_base_bdevs": 4, 00:15:04.853 "num_base_bdevs_discovered": 0, 00:15:04.853 "num_base_bdevs_operational": 4, 00:15:04.853 "base_bdevs_list": [ 00:15:04.853 { 00:15:04.853 "name": "BaseBdev1", 00:15:04.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.853 "is_configured": false, 00:15:04.853 "data_offset": 0, 00:15:04.853 "data_size": 0 00:15:04.853 }, 00:15:04.853 { 00:15:04.853 "name": "BaseBdev2", 00:15:04.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.853 "is_configured": false, 00:15:04.853 "data_offset": 0, 00:15:04.853 "data_size": 0 00:15:04.853 }, 00:15:04.853 { 00:15:04.853 "name": "BaseBdev3", 00:15:04.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.853 "is_configured": false, 00:15:04.853 "data_offset": 0, 00:15:04.853 "data_size": 0 00:15:04.853 }, 00:15:04.853 { 00:15:04.853 "name": "BaseBdev4", 00:15:04.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.853 "is_configured": false, 00:15:04.853 "data_offset": 0, 00:15:04.853 "data_size": 0 00:15:04.853 } 00:15:04.853 ] 00:15:04.853 }' 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.853 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 14:14:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 [2024-11-27 14:14:35.656341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.421 [2024-11-27 14:14:35.656707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 [2024-11-27 14:14:35.664323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.421 [2024-11-27 14:14:35.664552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.421 [2024-11-27 14:14:35.664674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.421 [2024-11-27 14:14:35.664827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.421 [2024-11-27 14:14:35.664963] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.421 [2024-11-27 14:14:35.665022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.421 [2024-11-27 14:14:35.665125] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:05.421 [2024-11-27 14:14:35.665279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 [2024-11-27 14:14:35.713175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.421 BaseBdev1 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.421 14:14:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.422 [ 00:15:05.422 { 00:15:05.422 "name": "BaseBdev1", 00:15:05.422 "aliases": [ 00:15:05.422 "ccb5e946-11a0-41b2-8668-f727c4dd2b6d" 00:15:05.422 ], 00:15:05.422 "product_name": "Malloc disk", 00:15:05.422 "block_size": 512, 00:15:05.422 "num_blocks": 65536, 00:15:05.422 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:05.422 "assigned_rate_limits": { 00:15:05.422 "rw_ios_per_sec": 0, 00:15:05.422 "rw_mbytes_per_sec": 0, 00:15:05.422 "r_mbytes_per_sec": 0, 00:15:05.422 "w_mbytes_per_sec": 0 00:15:05.422 }, 00:15:05.422 "claimed": true, 00:15:05.422 "claim_type": "exclusive_write", 00:15:05.422 "zoned": false, 00:15:05.422 "supported_io_types": { 00:15:05.422 "read": true, 00:15:05.422 "write": true, 00:15:05.422 "unmap": true, 00:15:05.422 "flush": true, 00:15:05.422 "reset": true, 00:15:05.422 "nvme_admin": false, 00:15:05.422 "nvme_io": false, 00:15:05.422 "nvme_io_md": false, 00:15:05.422 "write_zeroes": true, 00:15:05.422 "zcopy": true, 00:15:05.422 "get_zone_info": false, 00:15:05.422 "zone_management": false, 00:15:05.422 "zone_append": false, 00:15:05.422 "compare": false, 00:15:05.422 "compare_and_write": false, 00:15:05.422 "abort": true, 00:15:05.422 "seek_hole": false, 00:15:05.422 "seek_data": false, 00:15:05.422 "copy": true, 00:15:05.422 "nvme_iov_md": false 00:15:05.422 }, 00:15:05.422 "memory_domains": [ 00:15:05.422 { 00:15:05.422 "dma_device_id": "system", 00:15:05.422 "dma_device_type": 1 00:15:05.422 }, 00:15:05.422 { 00:15:05.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.422 "dma_device_type": 2 00:15:05.422 } 
00:15:05.422 ], 00:15:05.422 "driver_specific": {} 00:15:05.422 } 00:15:05.422 ] 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.422 14:14:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.422 "name": "Existed_Raid", 00:15:05.422 "uuid": "e1e45dd8-553a-45fd-8ad8-7deeadc1675f", 00:15:05.422 "strip_size_kb": 64, 00:15:05.422 "state": "configuring", 00:15:05.422 "raid_level": "concat", 00:15:05.422 "superblock": true, 00:15:05.422 "num_base_bdevs": 4, 00:15:05.422 "num_base_bdevs_discovered": 1, 00:15:05.422 "num_base_bdevs_operational": 4, 00:15:05.422 "base_bdevs_list": [ 00:15:05.422 { 00:15:05.422 "name": "BaseBdev1", 00:15:05.422 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:05.422 "is_configured": true, 00:15:05.422 "data_offset": 2048, 00:15:05.422 "data_size": 63488 00:15:05.422 }, 00:15:05.422 { 00:15:05.422 "name": "BaseBdev2", 00:15:05.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.422 "is_configured": false, 00:15:05.422 "data_offset": 0, 00:15:05.422 "data_size": 0 00:15:05.422 }, 00:15:05.422 { 00:15:05.422 "name": "BaseBdev3", 00:15:05.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.422 "is_configured": false, 00:15:05.422 "data_offset": 0, 00:15:05.422 "data_size": 0 00:15:05.422 }, 00:15:05.422 { 00:15:05.422 "name": "BaseBdev4", 00:15:05.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.422 "is_configured": false, 00:15:05.422 "data_offset": 0, 00:15:05.422 "data_size": 0 00:15:05.422 } 00:15:05.422 ] 00:15:05.422 }' 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.422 14:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.991 14:14:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 [2024-11-27 14:14:36.249578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.991 [2024-11-27 14:14:36.250162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.991 [2024-11-27 14:14:36.261550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.991 [2024-11-27 14:14:36.265436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.991 [2024-11-27 14:14:36.265695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.991 [2024-11-27 14:14:36.265923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.991 [2024-11-27 14:14:36.266163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.991 [2024-11-27 14:14:36.266205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:05.991 [2024-11-27 14:14:36.266237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:05.991 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:05.992 "name": "Existed_Raid", 00:15:05.992 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:05.992 "strip_size_kb": 64, 00:15:05.992 "state": "configuring", 00:15:05.992 "raid_level": "concat", 00:15:05.992 "superblock": true, 00:15:05.992 "num_base_bdevs": 4, 00:15:05.992 "num_base_bdevs_discovered": 1, 00:15:05.992 "num_base_bdevs_operational": 4, 00:15:05.992 "base_bdevs_list": [ 00:15:05.992 { 00:15:05.992 "name": "BaseBdev1", 00:15:05.992 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:05.992 "is_configured": true, 00:15:05.992 "data_offset": 2048, 00:15:05.992 "data_size": 63488 00:15:05.992 }, 00:15:05.992 { 00:15:05.992 "name": "BaseBdev2", 00:15:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.992 "is_configured": false, 00:15:05.992 "data_offset": 0, 00:15:05.992 "data_size": 0 00:15:05.992 }, 00:15:05.992 { 00:15:05.992 "name": "BaseBdev3", 00:15:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.992 "is_configured": false, 00:15:05.992 "data_offset": 0, 00:15:05.992 "data_size": 0 00:15:05.992 }, 00:15:05.992 { 00:15:05.992 "name": "BaseBdev4", 00:15:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.992 "is_configured": false, 00:15:05.992 "data_offset": 0, 00:15:05.992 "data_size": 0 00:15:05.992 } 00:15:05.992 ] 00:15:05.992 }' 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.992 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.251 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:06.251 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.251 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.512 [2024-11-27 14:14:36.784209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:06.512 BaseBdev2 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.512 [ 00:15:06.512 { 00:15:06.512 "name": "BaseBdev2", 00:15:06.512 "aliases": [ 00:15:06.512 "92a41c1b-5798-4df4-a181-5021230f73c4" 00:15:06.512 ], 00:15:06.512 "product_name": "Malloc disk", 00:15:06.512 "block_size": 512, 00:15:06.512 "num_blocks": 65536, 00:15:06.512 "uuid": "92a41c1b-5798-4df4-a181-5021230f73c4", 
00:15:06.512 "assigned_rate_limits": { 00:15:06.512 "rw_ios_per_sec": 0, 00:15:06.512 "rw_mbytes_per_sec": 0, 00:15:06.512 "r_mbytes_per_sec": 0, 00:15:06.512 "w_mbytes_per_sec": 0 00:15:06.512 }, 00:15:06.512 "claimed": true, 00:15:06.512 "claim_type": "exclusive_write", 00:15:06.512 "zoned": false, 00:15:06.512 "supported_io_types": { 00:15:06.512 "read": true, 00:15:06.512 "write": true, 00:15:06.512 "unmap": true, 00:15:06.512 "flush": true, 00:15:06.512 "reset": true, 00:15:06.512 "nvme_admin": false, 00:15:06.512 "nvme_io": false, 00:15:06.512 "nvme_io_md": false, 00:15:06.512 "write_zeroes": true, 00:15:06.512 "zcopy": true, 00:15:06.512 "get_zone_info": false, 00:15:06.512 "zone_management": false, 00:15:06.512 "zone_append": false, 00:15:06.512 "compare": false, 00:15:06.512 "compare_and_write": false, 00:15:06.512 "abort": true, 00:15:06.512 "seek_hole": false, 00:15:06.512 "seek_data": false, 00:15:06.512 "copy": true, 00:15:06.512 "nvme_iov_md": false 00:15:06.512 }, 00:15:06.512 "memory_domains": [ 00:15:06.512 { 00:15:06.512 "dma_device_id": "system", 00:15:06.512 "dma_device_type": 1 00:15:06.512 }, 00:15:06.512 { 00:15:06.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.512 "dma_device_type": 2 00:15:06.512 } 00:15:06.512 ], 00:15:06.512 "driver_specific": {} 00:15:06.512 } 00:15:06.512 ] 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.512 "name": "Existed_Raid", 00:15:06.512 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:06.512 "strip_size_kb": 64, 00:15:06.512 "state": "configuring", 00:15:06.512 "raid_level": "concat", 00:15:06.512 "superblock": true, 00:15:06.512 "num_base_bdevs": 4, 00:15:06.512 "num_base_bdevs_discovered": 2, 00:15:06.512 
"num_base_bdevs_operational": 4, 00:15:06.512 "base_bdevs_list": [ 00:15:06.512 { 00:15:06.512 "name": "BaseBdev1", 00:15:06.512 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:06.512 "is_configured": true, 00:15:06.512 "data_offset": 2048, 00:15:06.512 "data_size": 63488 00:15:06.512 }, 00:15:06.512 { 00:15:06.512 "name": "BaseBdev2", 00:15:06.512 "uuid": "92a41c1b-5798-4df4-a181-5021230f73c4", 00:15:06.512 "is_configured": true, 00:15:06.512 "data_offset": 2048, 00:15:06.512 "data_size": 63488 00:15:06.512 }, 00:15:06.512 { 00:15:06.512 "name": "BaseBdev3", 00:15:06.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.512 "is_configured": false, 00:15:06.512 "data_offset": 0, 00:15:06.512 "data_size": 0 00:15:06.512 }, 00:15:06.512 { 00:15:06.512 "name": "BaseBdev4", 00:15:06.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.512 "is_configured": false, 00:15:06.512 "data_offset": 0, 00:15:06.512 "data_size": 0 00:15:06.512 } 00:15:06.512 ] 00:15:06.512 }' 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.512 14:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.078 [2024-11-27 14:14:37.407532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.078 BaseBdev3 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.078 [ 00:15:07.078 { 00:15:07.078 "name": "BaseBdev3", 00:15:07.078 "aliases": [ 00:15:07.078 "fa31867d-85da-409d-b2b3-7e6eaa46e917" 00:15:07.078 ], 00:15:07.078 "product_name": "Malloc disk", 00:15:07.078 "block_size": 512, 00:15:07.078 "num_blocks": 65536, 00:15:07.078 "uuid": "fa31867d-85da-409d-b2b3-7e6eaa46e917", 00:15:07.078 "assigned_rate_limits": { 00:15:07.078 "rw_ios_per_sec": 0, 00:15:07.078 "rw_mbytes_per_sec": 0, 00:15:07.078 "r_mbytes_per_sec": 0, 00:15:07.078 "w_mbytes_per_sec": 0 00:15:07.078 }, 00:15:07.078 "claimed": true, 00:15:07.078 "claim_type": "exclusive_write", 00:15:07.078 "zoned": false, 00:15:07.078 "supported_io_types": { 
00:15:07.078 "read": true, 00:15:07.078 "write": true, 00:15:07.078 "unmap": true, 00:15:07.078 "flush": true, 00:15:07.078 "reset": true, 00:15:07.078 "nvme_admin": false, 00:15:07.078 "nvme_io": false, 00:15:07.078 "nvme_io_md": false, 00:15:07.078 "write_zeroes": true, 00:15:07.078 "zcopy": true, 00:15:07.078 "get_zone_info": false, 00:15:07.078 "zone_management": false, 00:15:07.078 "zone_append": false, 00:15:07.078 "compare": false, 00:15:07.078 "compare_and_write": false, 00:15:07.078 "abort": true, 00:15:07.078 "seek_hole": false, 00:15:07.078 "seek_data": false, 00:15:07.078 "copy": true, 00:15:07.078 "nvme_iov_md": false 00:15:07.078 }, 00:15:07.078 "memory_domains": [ 00:15:07.078 { 00:15:07.078 "dma_device_id": "system", 00:15:07.078 "dma_device_type": 1 00:15:07.078 }, 00:15:07.078 { 00:15:07.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.078 "dma_device_type": 2 00:15:07.078 } 00:15:07.078 ], 00:15:07.078 "driver_specific": {} 00:15:07.078 } 00:15:07.078 ] 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:07.078 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.079 "name": "Existed_Raid", 00:15:07.079 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:07.079 "strip_size_kb": 64, 00:15:07.079 "state": "configuring", 00:15:07.079 "raid_level": "concat", 00:15:07.079 "superblock": true, 00:15:07.079 "num_base_bdevs": 4, 00:15:07.079 "num_base_bdevs_discovered": 3, 00:15:07.079 "num_base_bdevs_operational": 4, 00:15:07.079 "base_bdevs_list": [ 00:15:07.079 { 00:15:07.079 "name": "BaseBdev1", 00:15:07.079 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:07.079 "is_configured": true, 00:15:07.079 "data_offset": 2048, 00:15:07.079 "data_size": 63488 00:15:07.079 }, 00:15:07.079 { 00:15:07.079 "name": "BaseBdev2", 00:15:07.079 
"uuid": "92a41c1b-5798-4df4-a181-5021230f73c4", 00:15:07.079 "is_configured": true, 00:15:07.079 "data_offset": 2048, 00:15:07.079 "data_size": 63488 00:15:07.079 }, 00:15:07.079 { 00:15:07.079 "name": "BaseBdev3", 00:15:07.079 "uuid": "fa31867d-85da-409d-b2b3-7e6eaa46e917", 00:15:07.079 "is_configured": true, 00:15:07.079 "data_offset": 2048, 00:15:07.079 "data_size": 63488 00:15:07.079 }, 00:15:07.079 { 00:15:07.079 "name": "BaseBdev4", 00:15:07.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.079 "is_configured": false, 00:15:07.079 "data_offset": 0, 00:15:07.079 "data_size": 0 00:15:07.079 } 00:15:07.079 ] 00:15:07.079 }' 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.079 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 [2024-11-27 14:14:37.980587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:07.646 [2024-11-27 14:14:37.980952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:07.646 [2024-11-27 14:14:37.980971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.646 BaseBdev4 00:15:07.646 [2024-11-27 14:14:37.981286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:07.646 [2024-11-27 14:14:37.981500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:07.646 [2024-11-27 14:14:37.981520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.646 [2024-11-27 14:14:37.981735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.646 14:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 [ 00:15:07.646 { 00:15:07.646 "name": "BaseBdev4", 00:15:07.646 "aliases": [ 00:15:07.646 "873cdc0e-16fd-412b-b2d4-9950388411e7" 00:15:07.646 ], 00:15:07.646 "product_name": "Malloc disk", 00:15:07.646 "block_size": 512, 00:15:07.646 
"num_blocks": 65536, 00:15:07.646 "uuid": "873cdc0e-16fd-412b-b2d4-9950388411e7", 00:15:07.646 "assigned_rate_limits": { 00:15:07.646 "rw_ios_per_sec": 0, 00:15:07.646 "rw_mbytes_per_sec": 0, 00:15:07.646 "r_mbytes_per_sec": 0, 00:15:07.646 "w_mbytes_per_sec": 0 00:15:07.646 }, 00:15:07.646 "claimed": true, 00:15:07.646 "claim_type": "exclusive_write", 00:15:07.646 "zoned": false, 00:15:07.646 "supported_io_types": { 00:15:07.646 "read": true, 00:15:07.646 "write": true, 00:15:07.646 "unmap": true, 00:15:07.646 "flush": true, 00:15:07.646 "reset": true, 00:15:07.646 "nvme_admin": false, 00:15:07.646 "nvme_io": false, 00:15:07.646 "nvme_io_md": false, 00:15:07.646 "write_zeroes": true, 00:15:07.646 "zcopy": true, 00:15:07.646 "get_zone_info": false, 00:15:07.646 "zone_management": false, 00:15:07.646 "zone_append": false, 00:15:07.646 "compare": false, 00:15:07.646 "compare_and_write": false, 00:15:07.646 "abort": true, 00:15:07.646 "seek_hole": false, 00:15:07.646 "seek_data": false, 00:15:07.646 "copy": true, 00:15:07.646 "nvme_iov_md": false 00:15:07.646 }, 00:15:07.646 "memory_domains": [ 00:15:07.646 { 00:15:07.646 "dma_device_id": "system", 00:15:07.646 "dma_device_type": 1 00:15:07.646 }, 00:15:07.646 { 00:15:07.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.646 "dma_device_type": 2 00:15:07.646 } 00:15:07.646 ], 00:15:07.646 "driver_specific": {} 00:15:07.646 } 00:15:07.646 ] 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.646 "name": "Existed_Raid", 00:15:07.646 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:07.646 "strip_size_kb": 64, 00:15:07.646 "state": "online", 00:15:07.646 "raid_level": "concat", 00:15:07.646 "superblock": true, 00:15:07.646 "num_base_bdevs": 4, 
00:15:07.646 "num_base_bdevs_discovered": 4, 00:15:07.646 "num_base_bdevs_operational": 4, 00:15:07.646 "base_bdevs_list": [ 00:15:07.646 { 00:15:07.646 "name": "BaseBdev1", 00:15:07.646 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:07.646 "is_configured": true, 00:15:07.646 "data_offset": 2048, 00:15:07.646 "data_size": 63488 00:15:07.646 }, 00:15:07.646 { 00:15:07.646 "name": "BaseBdev2", 00:15:07.646 "uuid": "92a41c1b-5798-4df4-a181-5021230f73c4", 00:15:07.646 "is_configured": true, 00:15:07.646 "data_offset": 2048, 00:15:07.646 "data_size": 63488 00:15:07.646 }, 00:15:07.646 { 00:15:07.646 "name": "BaseBdev3", 00:15:07.646 "uuid": "fa31867d-85da-409d-b2b3-7e6eaa46e917", 00:15:07.646 "is_configured": true, 00:15:07.646 "data_offset": 2048, 00:15:07.646 "data_size": 63488 00:15:07.646 }, 00:15:07.646 { 00:15:07.646 "name": "BaseBdev4", 00:15:07.646 "uuid": "873cdc0e-16fd-412b-b2d4-9950388411e7", 00:15:07.646 "is_configured": true, 00:15:07.646 "data_offset": 2048, 00:15:07.646 "data_size": 63488 00:15:07.646 } 00:15:07.646 ] 00:15:07.646 }' 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.646 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.214 
14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.214 [2024-11-27 14:14:38.537411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.214 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.214 "name": "Existed_Raid", 00:15:08.214 "aliases": [ 00:15:08.214 "9dddbca0-3855-431b-907b-f406431dc066" 00:15:08.214 ], 00:15:08.214 "product_name": "Raid Volume", 00:15:08.214 "block_size": 512, 00:15:08.214 "num_blocks": 253952, 00:15:08.214 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:08.214 "assigned_rate_limits": { 00:15:08.214 "rw_ios_per_sec": 0, 00:15:08.214 "rw_mbytes_per_sec": 0, 00:15:08.214 "r_mbytes_per_sec": 0, 00:15:08.214 "w_mbytes_per_sec": 0 00:15:08.214 }, 00:15:08.214 "claimed": false, 00:15:08.214 "zoned": false, 00:15:08.214 "supported_io_types": { 00:15:08.214 "read": true, 00:15:08.214 "write": true, 00:15:08.214 "unmap": true, 00:15:08.214 "flush": true, 00:15:08.214 "reset": true, 00:15:08.214 "nvme_admin": false, 00:15:08.214 "nvme_io": false, 00:15:08.214 "nvme_io_md": false, 00:15:08.214 "write_zeroes": true, 00:15:08.214 "zcopy": false, 00:15:08.214 "get_zone_info": false, 00:15:08.214 "zone_management": false, 00:15:08.214 "zone_append": false, 00:15:08.214 "compare": false, 00:15:08.214 "compare_and_write": false, 00:15:08.214 "abort": false, 00:15:08.214 "seek_hole": false, 00:15:08.214 "seek_data": false, 00:15:08.214 "copy": false, 00:15:08.214 
"nvme_iov_md": false 00:15:08.214 }, 00:15:08.214 "memory_domains": [ 00:15:08.214 { 00:15:08.214 "dma_device_id": "system", 00:15:08.214 "dma_device_type": 1 00:15:08.214 }, 00:15:08.214 { 00:15:08.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.214 "dma_device_type": 2 00:15:08.214 }, 00:15:08.214 { 00:15:08.214 "dma_device_id": "system", 00:15:08.215 "dma_device_type": 1 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.215 "dma_device_type": 2 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "dma_device_id": "system", 00:15:08.215 "dma_device_type": 1 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.215 "dma_device_type": 2 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "dma_device_id": "system", 00:15:08.215 "dma_device_type": 1 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.215 "dma_device_type": 2 00:15:08.215 } 00:15:08.215 ], 00:15:08.215 "driver_specific": { 00:15:08.215 "raid": { 00:15:08.215 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:08.215 "strip_size_kb": 64, 00:15:08.215 "state": "online", 00:15:08.215 "raid_level": "concat", 00:15:08.215 "superblock": true, 00:15:08.215 "num_base_bdevs": 4, 00:15:08.215 "num_base_bdevs_discovered": 4, 00:15:08.215 "num_base_bdevs_operational": 4, 00:15:08.215 "base_bdevs_list": [ 00:15:08.215 { 00:15:08.215 "name": "BaseBdev1", 00:15:08.215 "uuid": "ccb5e946-11a0-41b2-8668-f727c4dd2b6d", 00:15:08.215 "is_configured": true, 00:15:08.215 "data_offset": 2048, 00:15:08.215 "data_size": 63488 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "name": "BaseBdev2", 00:15:08.215 "uuid": "92a41c1b-5798-4df4-a181-5021230f73c4", 00:15:08.215 "is_configured": true, 00:15:08.215 "data_offset": 2048, 00:15:08.215 "data_size": 63488 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "name": "BaseBdev3", 00:15:08.215 "uuid": "fa31867d-85da-409d-b2b3-7e6eaa46e917", 00:15:08.215 "is_configured": true, 
00:15:08.215 "data_offset": 2048, 00:15:08.215 "data_size": 63488 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "name": "BaseBdev4", 00:15:08.215 "uuid": "873cdc0e-16fd-412b-b2d4-9950388411e7", 00:15:08.215 "is_configured": true, 00:15:08.215 "data_offset": 2048, 00:15:08.215 "data_size": 63488 00:15:08.215 } 00:15:08.215 ] 00:15:08.215 } 00:15:08.215 } 00:15:08.215 }' 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:08.215 BaseBdev2 00:15:08.215 BaseBdev3 00:15:08.215 BaseBdev4' 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.215 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.474 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.474 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.474 14:14:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.474 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.475 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.475 [2024-11-27 14:14:38.904987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.475 [2024-11-27 14:14:38.905159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.475 [2024-11-27 14:14:38.905366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.734 14:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.734 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:08.734 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.734 "name": "Existed_Raid", 00:15:08.734 "uuid": "9dddbca0-3855-431b-907b-f406431dc066", 00:15:08.734 "strip_size_kb": 64, 00:15:08.734 "state": "offline", 00:15:08.734 "raid_level": "concat", 00:15:08.734 "superblock": true, 00:15:08.734 "num_base_bdevs": 4, 00:15:08.734 "num_base_bdevs_discovered": 3, 00:15:08.734 "num_base_bdevs_operational": 3, 00:15:08.734 "base_bdevs_list": [ 00:15:08.734 { 00:15:08.734 "name": null, 00:15:08.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.734 "is_configured": false, 00:15:08.734 "data_offset": 0, 00:15:08.734 "data_size": 63488 00:15:08.734 }, 00:15:08.734 { 00:15:08.734 "name": "BaseBdev2", 00:15:08.734 "uuid": "92a41c1b-5798-4df4-a181-5021230f73c4", 00:15:08.734 "is_configured": true, 00:15:08.734 "data_offset": 2048, 00:15:08.734 "data_size": 63488 00:15:08.734 }, 00:15:08.734 { 00:15:08.734 "name": "BaseBdev3", 00:15:08.734 "uuid": "fa31867d-85da-409d-b2b3-7e6eaa46e917", 00:15:08.734 "is_configured": true, 00:15:08.734 "data_offset": 2048, 00:15:08.734 "data_size": 63488 00:15:08.734 }, 00:15:08.734 { 00:15:08.734 "name": "BaseBdev4", 00:15:08.734 "uuid": "873cdc0e-16fd-412b-b2d4-9950388411e7", 00:15:08.734 "is_configured": true, 00:15:08.734 "data_offset": 2048, 00:15:08.734 "data_size": 63488 00:15:08.734 } 00:15:08.734 ] 00:15:08.734 }' 00:15:08.734 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.734 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.992 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:08.992 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:08.992 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.992 
14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.992 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:08.992 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.250 [2024-11-27 14:14:39.546966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.250 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.250 [2024-11-27 14:14:39.696533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:09.509 14:14:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.509 [2024-11-27 14:14:39.841911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:09.509 [2024-11-27 14:14:39.842103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.509 14:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.769 BaseBdev2 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.769 [ 00:15:09.769 { 00:15:09.769 "name": "BaseBdev2", 00:15:09.769 "aliases": [ 00:15:09.769 
"61c6caad-17bf-4cbb-8d1b-7363cb62a5bb" 00:15:09.769 ], 00:15:09.769 "product_name": "Malloc disk", 00:15:09.769 "block_size": 512, 00:15:09.769 "num_blocks": 65536, 00:15:09.769 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:09.769 "assigned_rate_limits": { 00:15:09.769 "rw_ios_per_sec": 0, 00:15:09.769 "rw_mbytes_per_sec": 0, 00:15:09.769 "r_mbytes_per_sec": 0, 00:15:09.769 "w_mbytes_per_sec": 0 00:15:09.769 }, 00:15:09.769 "claimed": false, 00:15:09.769 "zoned": false, 00:15:09.769 "supported_io_types": { 00:15:09.769 "read": true, 00:15:09.769 "write": true, 00:15:09.769 "unmap": true, 00:15:09.769 "flush": true, 00:15:09.769 "reset": true, 00:15:09.769 "nvme_admin": false, 00:15:09.769 "nvme_io": false, 00:15:09.769 "nvme_io_md": false, 00:15:09.769 "write_zeroes": true, 00:15:09.769 "zcopy": true, 00:15:09.769 "get_zone_info": false, 00:15:09.769 "zone_management": false, 00:15:09.769 "zone_append": false, 00:15:09.769 "compare": false, 00:15:09.769 "compare_and_write": false, 00:15:09.769 "abort": true, 00:15:09.769 "seek_hole": false, 00:15:09.769 "seek_data": false, 00:15:09.769 "copy": true, 00:15:09.769 "nvme_iov_md": false 00:15:09.769 }, 00:15:09.769 "memory_domains": [ 00:15:09.769 { 00:15:09.769 "dma_device_id": "system", 00:15:09.769 "dma_device_type": 1 00:15:09.769 }, 00:15:09.769 { 00:15:09.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.769 "dma_device_type": 2 00:15:09.769 } 00:15:09.769 ], 00:15:09.769 "driver_specific": {} 00:15:09.769 } 00:15:09.769 ] 00:15:09.769 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.770 14:14:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.770 BaseBdev3 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.770 [ 00:15:09.770 { 
00:15:09.770 "name": "BaseBdev3", 00:15:09.770 "aliases": [ 00:15:09.770 "40aadeb9-3db4-4b67-a8d8-35b1316ab65d" 00:15:09.770 ], 00:15:09.770 "product_name": "Malloc disk", 00:15:09.770 "block_size": 512, 00:15:09.770 "num_blocks": 65536, 00:15:09.770 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:09.770 "assigned_rate_limits": { 00:15:09.770 "rw_ios_per_sec": 0, 00:15:09.770 "rw_mbytes_per_sec": 0, 00:15:09.770 "r_mbytes_per_sec": 0, 00:15:09.770 "w_mbytes_per_sec": 0 00:15:09.770 }, 00:15:09.770 "claimed": false, 00:15:09.770 "zoned": false, 00:15:09.770 "supported_io_types": { 00:15:09.770 "read": true, 00:15:09.770 "write": true, 00:15:09.770 "unmap": true, 00:15:09.770 "flush": true, 00:15:09.770 "reset": true, 00:15:09.770 "nvme_admin": false, 00:15:09.770 "nvme_io": false, 00:15:09.770 "nvme_io_md": false, 00:15:09.770 "write_zeroes": true, 00:15:09.770 "zcopy": true, 00:15:09.770 "get_zone_info": false, 00:15:09.770 "zone_management": false, 00:15:09.770 "zone_append": false, 00:15:09.770 "compare": false, 00:15:09.770 "compare_and_write": false, 00:15:09.770 "abort": true, 00:15:09.770 "seek_hole": false, 00:15:09.770 "seek_data": false, 00:15:09.770 "copy": true, 00:15:09.770 "nvme_iov_md": false 00:15:09.770 }, 00:15:09.770 "memory_domains": [ 00:15:09.770 { 00:15:09.770 "dma_device_id": "system", 00:15:09.770 "dma_device_type": 1 00:15:09.770 }, 00:15:09.770 { 00:15:09.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.770 "dma_device_type": 2 00:15:09.770 } 00:15:09.770 ], 00:15:09.770 "driver_specific": {} 00:15:09.770 } 00:15:09.770 ] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.770 BaseBdev4 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:09.771 [ 00:15:09.771 { 00:15:09.771 "name": "BaseBdev4", 00:15:09.771 "aliases": [ 00:15:09.771 "8873c0c6-514d-4ee4-a3cc-04ea028ec73f" 00:15:09.771 ], 00:15:09.771 "product_name": "Malloc disk", 00:15:09.771 "block_size": 512, 00:15:09.771 "num_blocks": 65536, 00:15:09.771 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:09.771 "assigned_rate_limits": { 00:15:09.771 "rw_ios_per_sec": 0, 00:15:09.771 "rw_mbytes_per_sec": 0, 00:15:09.771 "r_mbytes_per_sec": 0, 00:15:09.771 "w_mbytes_per_sec": 0 00:15:09.771 }, 00:15:09.771 "claimed": false, 00:15:09.771 "zoned": false, 00:15:09.771 "supported_io_types": { 00:15:09.771 "read": true, 00:15:09.771 "write": true, 00:15:09.771 "unmap": true, 00:15:09.771 "flush": true, 00:15:09.771 "reset": true, 00:15:09.771 "nvme_admin": false, 00:15:09.771 "nvme_io": false, 00:15:09.771 "nvme_io_md": false, 00:15:09.771 "write_zeroes": true, 00:15:09.771 "zcopy": true, 00:15:09.771 "get_zone_info": false, 00:15:09.771 "zone_management": false, 00:15:09.771 "zone_append": false, 00:15:09.771 "compare": false, 00:15:09.771 "compare_and_write": false, 00:15:09.771 "abort": true, 00:15:09.771 "seek_hole": false, 00:15:09.771 "seek_data": false, 00:15:09.771 "copy": true, 00:15:09.771 "nvme_iov_md": false 00:15:09.771 }, 00:15:09.771 "memory_domains": [ 00:15:09.771 { 00:15:09.771 "dma_device_id": "system", 00:15:09.771 "dma_device_type": 1 00:15:09.771 }, 00:15:09.771 { 00:15:09.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.771 "dma_device_type": 2 00:15:09.771 } 00:15:09.771 ], 00:15:09.771 "driver_specific": {} 00:15:09.771 } 00:15:09.771 ] 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.771 14:14:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.771 [2024-11-27 14:14:40.215851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.771 [2024-11-27 14:14:40.216106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.771 [2024-11-27 14:14:40.216314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.771 [2024-11-27 14:14:40.218935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.771 [2024-11-27 14:14:40.219159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.771 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.772 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.772 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.772 "name": "Existed_Raid", 00:15:09.772 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:09.772 "strip_size_kb": 64, 00:15:09.772 "state": "configuring", 00:15:09.772 "raid_level": "concat", 00:15:09.772 "superblock": true, 00:15:09.772 "num_base_bdevs": 4, 00:15:09.772 "num_base_bdevs_discovered": 3, 00:15:09.772 "num_base_bdevs_operational": 4, 00:15:09.772 "base_bdevs_list": [ 00:15:09.772 { 00:15:09.772 "name": "BaseBdev1", 00:15:09.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.772 "is_configured": false, 00:15:09.772 "data_offset": 0, 00:15:09.772 "data_size": 0 00:15:09.772 }, 00:15:09.772 { 00:15:09.772 "name": "BaseBdev2", 00:15:09.772 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:09.772 "is_configured": true, 00:15:09.772 "data_offset": 2048, 00:15:09.772 "data_size": 63488 
00:15:09.772 }, 00:15:09.772 { 00:15:09.772 "name": "BaseBdev3", 00:15:09.772 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:09.772 "is_configured": true, 00:15:09.772 "data_offset": 2048, 00:15:09.772 "data_size": 63488 00:15:09.772 }, 00:15:09.772 { 00:15:09.772 "name": "BaseBdev4", 00:15:09.772 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:09.772 "is_configured": true, 00:15:09.772 "data_offset": 2048, 00:15:09.772 "data_size": 63488 00:15:09.772 } 00:15:09.772 ] 00:15:09.772 }' 00:15:09.772 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.772 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.346 [2024-11-27 14:14:40.732039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.346 "name": "Existed_Raid", 00:15:10.346 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:10.346 "strip_size_kb": 64, 00:15:10.346 "state": "configuring", 00:15:10.346 "raid_level": "concat", 00:15:10.346 "superblock": true, 00:15:10.346 "num_base_bdevs": 4, 00:15:10.346 "num_base_bdevs_discovered": 2, 00:15:10.346 "num_base_bdevs_operational": 4, 00:15:10.346 "base_bdevs_list": [ 00:15:10.346 { 00:15:10.346 "name": "BaseBdev1", 00:15:10.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.346 "is_configured": false, 00:15:10.346 "data_offset": 0, 00:15:10.346 "data_size": 0 00:15:10.346 }, 00:15:10.346 { 00:15:10.346 "name": null, 00:15:10.346 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:10.346 "is_configured": false, 00:15:10.346 "data_offset": 0, 00:15:10.346 "data_size": 63488 
00:15:10.346 }, 00:15:10.346 { 00:15:10.346 "name": "BaseBdev3", 00:15:10.346 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:10.346 "is_configured": true, 00:15:10.346 "data_offset": 2048, 00:15:10.346 "data_size": 63488 00:15:10.346 }, 00:15:10.346 { 00:15:10.346 "name": "BaseBdev4", 00:15:10.346 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:10.346 "is_configured": true, 00:15:10.346 "data_offset": 2048, 00:15:10.346 "data_size": 63488 00:15:10.346 } 00:15:10.346 ] 00:15:10.346 }' 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.346 14:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.929 [2024-11-27 14:14:41.332136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.929 BaseBdev1 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.929 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.929 [ 00:15:10.929 { 00:15:10.929 "name": "BaseBdev1", 00:15:10.929 "aliases": [ 00:15:10.929 "9948b63d-8efe-437a-80d5-a555e45519a0" 00:15:10.929 ], 00:15:10.929 "product_name": "Malloc disk", 00:15:10.929 "block_size": 512, 00:15:10.929 "num_blocks": 65536, 00:15:10.929 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:10.929 "assigned_rate_limits": { 00:15:10.929 "rw_ios_per_sec": 0, 00:15:10.929 "rw_mbytes_per_sec": 0, 
00:15:10.929 "r_mbytes_per_sec": 0, 00:15:10.929 "w_mbytes_per_sec": 0 00:15:10.929 }, 00:15:10.929 "claimed": true, 00:15:10.929 "claim_type": "exclusive_write", 00:15:10.929 "zoned": false, 00:15:10.929 "supported_io_types": { 00:15:10.929 "read": true, 00:15:10.929 "write": true, 00:15:10.929 "unmap": true, 00:15:10.929 "flush": true, 00:15:10.929 "reset": true, 00:15:10.929 "nvme_admin": false, 00:15:10.929 "nvme_io": false, 00:15:10.930 "nvme_io_md": false, 00:15:10.930 "write_zeroes": true, 00:15:10.930 "zcopy": true, 00:15:10.930 "get_zone_info": false, 00:15:10.930 "zone_management": false, 00:15:10.930 "zone_append": false, 00:15:10.930 "compare": false, 00:15:10.930 "compare_and_write": false, 00:15:10.930 "abort": true, 00:15:10.930 "seek_hole": false, 00:15:10.930 "seek_data": false, 00:15:10.930 "copy": true, 00:15:10.930 "nvme_iov_md": false 00:15:10.930 }, 00:15:10.930 "memory_domains": [ 00:15:10.930 { 00:15:10.930 "dma_device_id": "system", 00:15:10.930 "dma_device_type": 1 00:15:10.930 }, 00:15:10.930 { 00:15:10.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.930 "dma_device_type": 2 00:15:10.930 } 00:15:10.930 ], 00:15:10.930 "driver_specific": {} 00:15:10.930 } 00:15:10.930 ] 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:10.930 14:14:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.930 "name": "Existed_Raid", 00:15:10.930 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:10.930 "strip_size_kb": 64, 00:15:10.930 "state": "configuring", 00:15:10.930 "raid_level": "concat", 00:15:10.930 "superblock": true, 00:15:10.930 "num_base_bdevs": 4, 00:15:10.930 "num_base_bdevs_discovered": 3, 00:15:10.930 "num_base_bdevs_operational": 4, 00:15:10.930 "base_bdevs_list": [ 00:15:10.930 { 00:15:10.930 "name": "BaseBdev1", 00:15:10.930 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:10.930 "is_configured": true, 00:15:10.930 "data_offset": 2048, 00:15:10.930 "data_size": 63488 00:15:10.930 }, 00:15:10.930 { 
00:15:10.930 "name": null, 00:15:10.930 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:10.930 "is_configured": false, 00:15:10.930 "data_offset": 0, 00:15:10.930 "data_size": 63488 00:15:10.930 }, 00:15:10.930 { 00:15:10.930 "name": "BaseBdev3", 00:15:10.930 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:10.930 "is_configured": true, 00:15:10.930 "data_offset": 2048, 00:15:10.930 "data_size": 63488 00:15:10.930 }, 00:15:10.930 { 00:15:10.930 "name": "BaseBdev4", 00:15:10.930 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:10.930 "is_configured": true, 00:15:10.930 "data_offset": 2048, 00:15:10.930 "data_size": 63488 00:15:10.930 } 00:15:10.930 ] 00:15:10.930 }' 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.930 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.497 [2024-11-27 14:14:41.944476] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.497 14:14:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.497 "name": "Existed_Raid", 00:15:11.497 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:11.497 "strip_size_kb": 64, 00:15:11.497 "state": "configuring", 00:15:11.497 "raid_level": "concat", 00:15:11.497 "superblock": true, 00:15:11.497 "num_base_bdevs": 4, 00:15:11.497 "num_base_bdevs_discovered": 2, 00:15:11.497 "num_base_bdevs_operational": 4, 00:15:11.497 "base_bdevs_list": [ 00:15:11.497 { 00:15:11.497 "name": "BaseBdev1", 00:15:11.497 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:11.497 "is_configured": true, 00:15:11.497 "data_offset": 2048, 00:15:11.497 "data_size": 63488 00:15:11.497 }, 00:15:11.497 { 00:15:11.497 "name": null, 00:15:11.497 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:11.497 "is_configured": false, 00:15:11.497 "data_offset": 0, 00:15:11.497 "data_size": 63488 00:15:11.497 }, 00:15:11.497 { 00:15:11.497 "name": null, 00:15:11.497 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:11.497 "is_configured": false, 00:15:11.497 "data_offset": 0, 00:15:11.497 "data_size": 63488 00:15:11.497 }, 00:15:11.497 { 00:15:11.497 "name": "BaseBdev4", 00:15:11.497 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:11.497 "is_configured": true, 00:15:11.497 "data_offset": 2048, 00:15:11.497 "data_size": 63488 00:15:11.497 } 00:15:11.497 ] 00:15:11.497 }' 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.497 14:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.065 
14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.065 [2024-11-27 14:14:42.508632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.065 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.066 "name": "Existed_Raid", 00:15:12.066 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:12.066 "strip_size_kb": 64, 00:15:12.066 "state": "configuring", 00:15:12.066 "raid_level": "concat", 00:15:12.066 "superblock": true, 00:15:12.066 "num_base_bdevs": 4, 00:15:12.066 "num_base_bdevs_discovered": 3, 00:15:12.066 "num_base_bdevs_operational": 4, 00:15:12.066 "base_bdevs_list": [ 00:15:12.066 { 00:15:12.066 "name": "BaseBdev1", 00:15:12.066 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:12.066 "is_configured": true, 00:15:12.066 "data_offset": 2048, 00:15:12.066 "data_size": 63488 00:15:12.066 }, 00:15:12.066 { 00:15:12.066 "name": null, 00:15:12.066 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:12.066 "is_configured": false, 00:15:12.066 "data_offset": 0, 00:15:12.066 "data_size": 63488 00:15:12.066 }, 00:15:12.066 { 00:15:12.066 "name": "BaseBdev3", 00:15:12.066 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:12.066 "is_configured": true, 00:15:12.066 "data_offset": 2048, 00:15:12.066 "data_size": 63488 00:15:12.066 }, 00:15:12.066 { 00:15:12.066 "name": "BaseBdev4", 00:15:12.066 "uuid": 
"8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:12.066 "is_configured": true, 00:15:12.066 "data_offset": 2048, 00:15:12.066 "data_size": 63488 00:15:12.066 } 00:15:12.066 ] 00:15:12.066 }' 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.066 14:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.632 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.632 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.633 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.633 [2024-11-27 14:14:43.096858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.890 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.891 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.891 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.891 "name": "Existed_Raid", 00:15:12.891 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:12.891 "strip_size_kb": 64, 00:15:12.891 "state": "configuring", 00:15:12.891 "raid_level": "concat", 00:15:12.891 "superblock": true, 00:15:12.891 "num_base_bdevs": 4, 00:15:12.891 "num_base_bdevs_discovered": 2, 00:15:12.891 "num_base_bdevs_operational": 4, 00:15:12.891 "base_bdevs_list": [ 00:15:12.891 { 00:15:12.891 "name": null, 00:15:12.891 
"uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:12.891 "is_configured": false, 00:15:12.891 "data_offset": 0, 00:15:12.891 "data_size": 63488 00:15:12.891 }, 00:15:12.891 { 00:15:12.891 "name": null, 00:15:12.891 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:12.891 "is_configured": false, 00:15:12.891 "data_offset": 0, 00:15:12.891 "data_size": 63488 00:15:12.891 }, 00:15:12.891 { 00:15:12.891 "name": "BaseBdev3", 00:15:12.891 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:12.891 "is_configured": true, 00:15:12.891 "data_offset": 2048, 00:15:12.891 "data_size": 63488 00:15:12.891 }, 00:15:12.891 { 00:15:12.891 "name": "BaseBdev4", 00:15:12.891 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:12.891 "is_configured": true, 00:15:12.891 "data_offset": 2048, 00:15:12.891 "data_size": 63488 00:15:12.891 } 00:15:12.891 ] 00:15:12.891 }' 00:15:12.891 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.891 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.458 [2024-11-27 14:14:43.732144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.458 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.458 "name": "Existed_Raid", 00:15:13.458 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:13.458 "strip_size_kb": 64, 00:15:13.458 "state": "configuring", 00:15:13.459 "raid_level": "concat", 00:15:13.459 "superblock": true, 00:15:13.459 "num_base_bdevs": 4, 00:15:13.459 "num_base_bdevs_discovered": 3, 00:15:13.459 "num_base_bdevs_operational": 4, 00:15:13.459 "base_bdevs_list": [ 00:15:13.459 { 00:15:13.459 "name": null, 00:15:13.459 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:13.459 "is_configured": false, 00:15:13.459 "data_offset": 0, 00:15:13.459 "data_size": 63488 00:15:13.459 }, 00:15:13.459 { 00:15:13.459 "name": "BaseBdev2", 00:15:13.459 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:13.459 "is_configured": true, 00:15:13.459 "data_offset": 2048, 00:15:13.459 "data_size": 63488 00:15:13.459 }, 00:15:13.459 { 00:15:13.459 "name": "BaseBdev3", 00:15:13.459 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:13.459 "is_configured": true, 00:15:13.459 "data_offset": 2048, 00:15:13.459 "data_size": 63488 00:15:13.459 }, 00:15:13.459 { 00:15:13.459 "name": "BaseBdev4", 00:15:13.459 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:13.459 "is_configured": true, 00:15:13.459 "data_offset": 2048, 00:15:13.459 "data_size": 63488 00:15:13.459 } 00:15:13.459 ] 00:15:13.459 }' 00:15:13.459 14:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.459 14:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.026 14:14:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9948b63d-8efe-437a-80d5-a555e45519a0 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.026 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.026 [2024-11-27 14:14:44.402994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:14.026 [2024-11-27 14:14:44.403296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:14.026 [2024-11-27 14:14:44.403322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:14.026 [2024-11-27 14:14:44.403658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:14.026 [2024-11-27 14:14:44.403868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:14.026 [2024-11-27 14:14:44.403899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:14.027 [2024-11-27 14:14:44.404074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.027 NewBaseBdev 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.027 14:14:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.027 [ 00:15:14.027 { 00:15:14.027 "name": "NewBaseBdev", 00:15:14.027 "aliases": [ 00:15:14.027 "9948b63d-8efe-437a-80d5-a555e45519a0" 00:15:14.027 ], 00:15:14.027 "product_name": "Malloc disk", 00:15:14.027 "block_size": 512, 00:15:14.027 "num_blocks": 65536, 00:15:14.027 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:14.027 "assigned_rate_limits": { 00:15:14.027 "rw_ios_per_sec": 0, 00:15:14.027 "rw_mbytes_per_sec": 0, 00:15:14.027 "r_mbytes_per_sec": 0, 00:15:14.027 "w_mbytes_per_sec": 0 00:15:14.027 }, 00:15:14.027 "claimed": true, 00:15:14.027 "claim_type": "exclusive_write", 00:15:14.027 "zoned": false, 00:15:14.027 "supported_io_types": { 00:15:14.027 "read": true, 00:15:14.027 "write": true, 00:15:14.027 "unmap": true, 00:15:14.027 "flush": true, 00:15:14.027 "reset": true, 00:15:14.027 "nvme_admin": false, 00:15:14.027 "nvme_io": false, 00:15:14.027 "nvme_io_md": false, 00:15:14.027 "write_zeroes": true, 00:15:14.027 "zcopy": true, 00:15:14.027 "get_zone_info": false, 00:15:14.027 "zone_management": false, 00:15:14.027 "zone_append": false, 00:15:14.027 "compare": false, 00:15:14.027 "compare_and_write": false, 00:15:14.027 "abort": true, 00:15:14.027 "seek_hole": false, 00:15:14.027 "seek_data": false, 00:15:14.027 "copy": true, 00:15:14.027 "nvme_iov_md": false 00:15:14.027 }, 00:15:14.027 "memory_domains": [ 00:15:14.027 { 00:15:14.027 "dma_device_id": "system", 00:15:14.027 "dma_device_type": 1 00:15:14.027 }, 00:15:14.027 { 00:15:14.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.027 "dma_device_type": 2 00:15:14.027 } 00:15:14.027 ], 00:15:14.027 "driver_specific": {} 00:15:14.027 } 00:15:14.027 ] 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:14.027 14:14:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.027 "name": "Existed_Raid", 00:15:14.027 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:14.027 "strip_size_kb": 64, 00:15:14.027 
"state": "online", 00:15:14.027 "raid_level": "concat", 00:15:14.027 "superblock": true, 00:15:14.027 "num_base_bdevs": 4, 00:15:14.027 "num_base_bdevs_discovered": 4, 00:15:14.027 "num_base_bdevs_operational": 4, 00:15:14.027 "base_bdevs_list": [ 00:15:14.027 { 00:15:14.027 "name": "NewBaseBdev", 00:15:14.027 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:14.027 "is_configured": true, 00:15:14.027 "data_offset": 2048, 00:15:14.027 "data_size": 63488 00:15:14.027 }, 00:15:14.027 { 00:15:14.027 "name": "BaseBdev2", 00:15:14.027 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:14.027 "is_configured": true, 00:15:14.027 "data_offset": 2048, 00:15:14.027 "data_size": 63488 00:15:14.027 }, 00:15:14.027 { 00:15:14.027 "name": "BaseBdev3", 00:15:14.027 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:14.027 "is_configured": true, 00:15:14.027 "data_offset": 2048, 00:15:14.027 "data_size": 63488 00:15:14.027 }, 00:15:14.027 { 00:15:14.027 "name": "BaseBdev4", 00:15:14.027 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:14.027 "is_configured": true, 00:15:14.027 "data_offset": 2048, 00:15:14.027 "data_size": 63488 00:15:14.027 } 00:15:14.027 ] 00:15:14.027 }' 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.027 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:14.700 
14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.700 [2024-11-27 14:14:44.959622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.700 14:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.700 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:14.700 "name": "Existed_Raid", 00:15:14.700 "aliases": [ 00:15:14.700 "6f887401-c8fc-459b-9a38-b9713645f988" 00:15:14.700 ], 00:15:14.700 "product_name": "Raid Volume", 00:15:14.700 "block_size": 512, 00:15:14.700 "num_blocks": 253952, 00:15:14.700 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:14.700 "assigned_rate_limits": { 00:15:14.700 "rw_ios_per_sec": 0, 00:15:14.700 "rw_mbytes_per_sec": 0, 00:15:14.700 "r_mbytes_per_sec": 0, 00:15:14.700 "w_mbytes_per_sec": 0 00:15:14.700 }, 00:15:14.700 "claimed": false, 00:15:14.700 "zoned": false, 00:15:14.700 "supported_io_types": { 00:15:14.700 "read": true, 00:15:14.700 "write": true, 00:15:14.700 "unmap": true, 00:15:14.700 "flush": true, 00:15:14.700 "reset": true, 00:15:14.700 "nvme_admin": false, 00:15:14.700 "nvme_io": false, 00:15:14.700 "nvme_io_md": false, 00:15:14.700 "write_zeroes": true, 00:15:14.700 "zcopy": false, 00:15:14.700 "get_zone_info": false, 00:15:14.700 "zone_management": false, 00:15:14.700 "zone_append": false, 00:15:14.700 "compare": false, 00:15:14.700 "compare_and_write": false, 00:15:14.700 "abort": 
false, 00:15:14.700 "seek_hole": false, 00:15:14.700 "seek_data": false, 00:15:14.700 "copy": false, 00:15:14.700 "nvme_iov_md": false 00:15:14.700 }, 00:15:14.700 "memory_domains": [ 00:15:14.700 { 00:15:14.700 "dma_device_id": "system", 00:15:14.700 "dma_device_type": 1 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.700 "dma_device_type": 2 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "system", 00:15:14.700 "dma_device_type": 1 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.700 "dma_device_type": 2 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "system", 00:15:14.700 "dma_device_type": 1 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.700 "dma_device_type": 2 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "system", 00:15:14.700 "dma_device_type": 1 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.700 "dma_device_type": 2 00:15:14.700 } 00:15:14.700 ], 00:15:14.700 "driver_specific": { 00:15:14.700 "raid": { 00:15:14.700 "uuid": "6f887401-c8fc-459b-9a38-b9713645f988", 00:15:14.700 "strip_size_kb": 64, 00:15:14.700 "state": "online", 00:15:14.700 "raid_level": "concat", 00:15:14.700 "superblock": true, 00:15:14.700 "num_base_bdevs": 4, 00:15:14.700 "num_base_bdevs_discovered": 4, 00:15:14.700 "num_base_bdevs_operational": 4, 00:15:14.700 "base_bdevs_list": [ 00:15:14.700 { 00:15:14.700 "name": "NewBaseBdev", 00:15:14.700 "uuid": "9948b63d-8efe-437a-80d5-a555e45519a0", 00:15:14.700 "is_configured": true, 00:15:14.700 "data_offset": 2048, 00:15:14.700 "data_size": 63488 00:15:14.700 }, 00:15:14.700 { 00:15:14.700 "name": "BaseBdev2", 00:15:14.700 "uuid": "61c6caad-17bf-4cbb-8d1b-7363cb62a5bb", 00:15:14.700 "is_configured": true, 00:15:14.700 "data_offset": 2048, 00:15:14.700 "data_size": 63488 00:15:14.700 }, 00:15:14.701 { 00:15:14.701 
"name": "BaseBdev3", 00:15:14.701 "uuid": "40aadeb9-3db4-4b67-a8d8-35b1316ab65d", 00:15:14.701 "is_configured": true, 00:15:14.701 "data_offset": 2048, 00:15:14.701 "data_size": 63488 00:15:14.701 }, 00:15:14.701 { 00:15:14.701 "name": "BaseBdev4", 00:15:14.701 "uuid": "8873c0c6-514d-4ee4-a3cc-04ea028ec73f", 00:15:14.701 "is_configured": true, 00:15:14.701 "data_offset": 2048, 00:15:14.701 "data_size": 63488 00:15:14.701 } 00:15:14.701 ] 00:15:14.701 } 00:15:14.701 } 00:15:14.701 }' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:14.701 BaseBdev2 00:15:14.701 BaseBdev3 00:15:14.701 BaseBdev4' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.701 14:14:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.701 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:14.983 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.984 [2024-11-27 14:14:45.315262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.984 [2024-11-27 14:14:45.315302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.984 [2024-11-27 14:14:45.315393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.984 [2024-11-27 14:14:45.315500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.984 [2024-11-27 14:14:45.315529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72263 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72263 ']' 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72263 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72263 00:15:14.984 killing process with pid 72263 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72263' 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72263 00:15:14.984 [2024-11-27 14:14:45.349700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.984 14:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72263 00:15:15.243 [2024-11-27 14:14:45.706838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.620 14:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:16.620 00:15:16.620 real 0m12.752s 00:15:16.620 user 0m21.076s 00:15:16.620 sys 0m1.750s 00:15:16.620 ************************************ 00:15:16.620 END TEST raid_state_function_test_sb 00:15:16.620 
************************************ 00:15:16.620 14:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.620 14:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.620 14:14:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:16.620 14:14:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:16.620 14:14:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.620 14:14:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.620 ************************************ 00:15:16.620 START TEST raid_superblock_test 00:15:16.620 ************************************ 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:16.620 14:14:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72946 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72946 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72946 ']' 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.620 14:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.620 [2024-11-27 14:14:46.980959] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:15:16.620 [2024-11-27 14:14:46.981144] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72946 ] 00:15:16.879 [2024-11-27 14:14:47.166739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.879 [2024-11-27 14:14:47.299598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.137 [2024-11-27 14:14:47.515577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.137 [2024-11-27 14:14:47.515620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.705 14:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:17.705 
14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 malloc1 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 [2024-11-27 14:14:48.052098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.705 [2024-11-27 14:14:48.052166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.705 [2024-11-27 14:14:48.052200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:17.705 [2024-11-27 14:14:48.052216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.705 [2024-11-27 14:14:48.055060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.705 [2024-11-27 14:14:48.055104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.705 pt1 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 malloc2 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 [2024-11-27 14:14:48.110001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.705 [2024-11-27 14:14:48.110077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.705 [2024-11-27 14:14:48.110116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:17.705 [2024-11-27 14:14:48.110132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.705 [2024-11-27 14:14:48.112850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.705 [2024-11-27 14:14:48.112890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.705 
pt2 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 malloc3 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 [2024-11-27 14:14:48.174397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:17.705 [2024-11-27 14:14:48.174457] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.705 [2024-11-27 14:14:48.174492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:17.705 [2024-11-27 14:14:48.174508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.705 [2024-11-27 14:14:48.177632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.705 [2024-11-27 14:14:48.177687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:17.705 pt3 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.705 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.706 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.706 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:17.706 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.706 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.964 malloc4 00:15:17.964 14:14:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.964 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:17.964 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.964 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.965 [2024-11-27 14:14:48.227790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:17.965 [2024-11-27 14:14:48.227881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.965 [2024-11-27 14:14:48.227914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:17.965 [2024-11-27 14:14:48.227932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.965 [2024-11-27 14:14:48.231278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.965 [2024-11-27 14:14:48.231365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:17.965 pt4 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.965 [2024-11-27 14:14:48.236004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.965 [2024-11-27 
14:14:48.238436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.965 [2024-11-27 14:14:48.238565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:17.965 [2024-11-27 14:14:48.238641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:17.965 [2024-11-27 14:14:48.238903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:17.965 [2024-11-27 14:14:48.238933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:17.965 [2024-11-27 14:14:48.239263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:17.965 [2024-11-27 14:14:48.239491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:17.965 [2024-11-27 14:14:48.239522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:17.965 [2024-11-27 14:14:48.239702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.965 "name": "raid_bdev1", 00:15:17.965 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:17.965 "strip_size_kb": 64, 00:15:17.965 "state": "online", 00:15:17.965 "raid_level": "concat", 00:15:17.965 "superblock": true, 00:15:17.965 "num_base_bdevs": 4, 00:15:17.965 "num_base_bdevs_discovered": 4, 00:15:17.965 "num_base_bdevs_operational": 4, 00:15:17.965 "base_bdevs_list": [ 00:15:17.965 { 00:15:17.965 "name": "pt1", 00:15:17.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.965 "is_configured": true, 00:15:17.965 "data_offset": 2048, 00:15:17.965 "data_size": 63488 00:15:17.965 }, 00:15:17.965 { 00:15:17.965 "name": "pt2", 00:15:17.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.965 "is_configured": true, 00:15:17.965 "data_offset": 2048, 00:15:17.965 "data_size": 63488 00:15:17.965 }, 00:15:17.965 { 00:15:17.965 "name": "pt3", 00:15:17.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.965 "is_configured": true, 00:15:17.965 "data_offset": 2048, 00:15:17.965 
"data_size": 63488 00:15:17.965 }, 00:15:17.965 { 00:15:17.965 "name": "pt4", 00:15:17.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:17.965 "is_configured": true, 00:15:17.965 "data_offset": 2048, 00:15:17.965 "data_size": 63488 00:15:17.965 } 00:15:17.965 ] 00:15:17.965 }' 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.965 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.532 [2024-11-27 14:14:48.760535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.532 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:18.532 "name": "raid_bdev1", 00:15:18.532 "aliases": [ 00:15:18.532 "6ae3aba7-243a-4ab4-afc7-184f51af44c9" 
00:15:18.532 ], 00:15:18.532 "product_name": "Raid Volume", 00:15:18.532 "block_size": 512, 00:15:18.532 "num_blocks": 253952, 00:15:18.532 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:18.532 "assigned_rate_limits": { 00:15:18.532 "rw_ios_per_sec": 0, 00:15:18.532 "rw_mbytes_per_sec": 0, 00:15:18.532 "r_mbytes_per_sec": 0, 00:15:18.532 "w_mbytes_per_sec": 0 00:15:18.532 }, 00:15:18.532 "claimed": false, 00:15:18.532 "zoned": false, 00:15:18.532 "supported_io_types": { 00:15:18.532 "read": true, 00:15:18.532 "write": true, 00:15:18.532 "unmap": true, 00:15:18.532 "flush": true, 00:15:18.532 "reset": true, 00:15:18.532 "nvme_admin": false, 00:15:18.532 "nvme_io": false, 00:15:18.532 "nvme_io_md": false, 00:15:18.532 "write_zeroes": true, 00:15:18.532 "zcopy": false, 00:15:18.532 "get_zone_info": false, 00:15:18.532 "zone_management": false, 00:15:18.532 "zone_append": false, 00:15:18.532 "compare": false, 00:15:18.532 "compare_and_write": false, 00:15:18.532 "abort": false, 00:15:18.532 "seek_hole": false, 00:15:18.532 "seek_data": false, 00:15:18.532 "copy": false, 00:15:18.532 "nvme_iov_md": false 00:15:18.532 }, 00:15:18.532 "memory_domains": [ 00:15:18.532 { 00:15:18.532 "dma_device_id": "system", 00:15:18.532 "dma_device_type": 1 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.532 "dma_device_type": 2 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": "system", 00:15:18.532 "dma_device_type": 1 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.532 "dma_device_type": 2 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": "system", 00:15:18.532 "dma_device_type": 1 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.532 "dma_device_type": 2 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": "system", 00:15:18.532 "dma_device_type": 1 00:15:18.532 }, 00:15:18.532 { 00:15:18.532 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:18.532 "dma_device_type": 2 00:15:18.532 } 00:15:18.532 ], 00:15:18.532 "driver_specific": { 00:15:18.532 "raid": { 00:15:18.532 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:18.532 "strip_size_kb": 64, 00:15:18.532 "state": "online", 00:15:18.532 "raid_level": "concat", 00:15:18.532 "superblock": true, 00:15:18.532 "num_base_bdevs": 4, 00:15:18.532 "num_base_bdevs_discovered": 4, 00:15:18.532 "num_base_bdevs_operational": 4, 00:15:18.532 "base_bdevs_list": [ 00:15:18.532 { 00:15:18.532 "name": "pt1", 00:15:18.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.532 "is_configured": true, 00:15:18.532 "data_offset": 2048, 00:15:18.532 "data_size": 63488 00:15:18.532 }, 00:15:18.532 { 00:15:18.533 "name": "pt2", 00:15:18.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.533 "is_configured": true, 00:15:18.533 "data_offset": 2048, 00:15:18.533 "data_size": 63488 00:15:18.533 }, 00:15:18.533 { 00:15:18.533 "name": "pt3", 00:15:18.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.533 "is_configured": true, 00:15:18.533 "data_offset": 2048, 00:15:18.533 "data_size": 63488 00:15:18.533 }, 00:15:18.533 { 00:15:18.533 "name": "pt4", 00:15:18.533 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:18.533 "is_configured": true, 00:15:18.533 "data_offset": 2048, 00:15:18.533 "data_size": 63488 00:15:18.533 } 00:15:18.533 ] 00:15:18.533 } 00:15:18.533 } 00:15:18.533 }' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:18.533 pt2 00:15:18.533 pt3 00:15:18.533 pt4' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.533 14:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.533 14:14:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.533 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 [2024-11-27 14:14:49.128680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6ae3aba7-243a-4ab4-afc7-184f51af44c9 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6ae3aba7-243a-4ab4-afc7-184f51af44c9 ']' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 [2024-11-27 14:14:49.176293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.792 [2024-11-27 14:14:49.176323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.792 [2024-11-27 14:14:49.176435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.792 [2024-11-27 14:14:49.176524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.792 [2024-11-27 14:14:49.176596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:18.792 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.051 14:14:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.051 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.051 [2024-11-27 14:14:49.332358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:19.051 [2024-11-27 14:14:49.335118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:19.052 [2024-11-27 14:14:49.335191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:19.052 [2024-11-27 14:14:49.335245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:19.052 [2024-11-27 14:14:49.335320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:19.052 [2024-11-27 14:14:49.335393] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:19.052 [2024-11-27 14:14:49.335427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:19.052 [2024-11-27 14:14:49.335458] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:19.052 [2024-11-27 14:14:49.335479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.052 [2024-11-27 14:14:49.335495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:19.052 request: 00:15:19.052 { 00:15:19.052 "name": "raid_bdev1", 00:15:19.052 "raid_level": "concat", 00:15:19.052 "base_bdevs": [ 00:15:19.052 "malloc1", 00:15:19.052 "malloc2", 00:15:19.052 "malloc3", 00:15:19.052 "malloc4" 00:15:19.052 ], 00:15:19.052 "strip_size_kb": 64, 00:15:19.052 "superblock": false, 00:15:19.052 "method": "bdev_raid_create", 00:15:19.052 "req_id": 1 00:15:19.052 } 00:15:19.052 Got JSON-RPC error response 00:15:19.052 response: 00:15:19.052 { 00:15:19.052 "code": -17, 00:15:19.052 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:19.052 } 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.052 [2024-11-27 14:14:49.396364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:19.052 [2024-11-27 14:14:49.396450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.052 [2024-11-27 14:14:49.396479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:19.052 [2024-11-27 14:14:49.396496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.052 [2024-11-27 14:14:49.399498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.052 [2024-11-27 14:14:49.399557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:19.052 [2024-11-27 14:14:49.399656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:19.052 [2024-11-27 14:14:49.399727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:19.052 pt1 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.052 "name": "raid_bdev1", 00:15:19.052 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:19.052 "strip_size_kb": 64, 00:15:19.052 "state": "configuring", 00:15:19.052 "raid_level": "concat", 00:15:19.052 "superblock": true, 00:15:19.052 "num_base_bdevs": 4, 00:15:19.052 "num_base_bdevs_discovered": 1, 00:15:19.052 "num_base_bdevs_operational": 4, 00:15:19.052 "base_bdevs_list": [ 00:15:19.052 { 00:15:19.052 "name": "pt1", 00:15:19.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.052 "is_configured": true, 00:15:19.052 "data_offset": 2048, 00:15:19.052 "data_size": 63488 00:15:19.052 }, 00:15:19.052 { 00:15:19.052 "name": null, 00:15:19.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.052 "is_configured": false, 00:15:19.052 "data_offset": 2048, 00:15:19.052 "data_size": 63488 00:15:19.052 }, 00:15:19.052 { 00:15:19.052 "name": null, 00:15:19.052 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.052 "is_configured": false, 00:15:19.052 "data_offset": 2048, 00:15:19.052 "data_size": 63488 00:15:19.052 }, 00:15:19.052 { 00:15:19.052 "name": null, 00:15:19.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.052 "is_configured": false, 00:15:19.052 "data_offset": 2048, 00:15:19.052 "data_size": 63488 00:15:19.052 } 00:15:19.052 ] 00:15:19.052 }' 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.052 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.620 [2024-11-27 14:14:49.960619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.620 [2024-11-27 14:14:49.960722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.620 [2024-11-27 14:14:49.960751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:19.620 [2024-11-27 14:14:49.960769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.620 [2024-11-27 14:14:49.961411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.620 [2024-11-27 14:14:49.961447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.620 [2024-11-27 14:14:49.961578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:19.620 [2024-11-27 14:14:49.961616] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.620 pt2 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.620 [2024-11-27 14:14:49.968581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.620 14:14:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.620 14:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.620 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.620 "name": "raid_bdev1", 00:15:19.620 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:19.620 "strip_size_kb": 64, 00:15:19.620 "state": "configuring", 00:15:19.620 "raid_level": "concat", 00:15:19.620 "superblock": true, 00:15:19.620 "num_base_bdevs": 4, 00:15:19.620 "num_base_bdevs_discovered": 1, 00:15:19.620 "num_base_bdevs_operational": 4, 00:15:19.620 "base_bdevs_list": [ 00:15:19.620 { 00:15:19.620 "name": "pt1", 00:15:19.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.620 "is_configured": true, 00:15:19.620 "data_offset": 2048, 00:15:19.620 "data_size": 63488 00:15:19.620 }, 00:15:19.620 { 00:15:19.620 "name": null, 00:15:19.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.620 "is_configured": false, 00:15:19.620 "data_offset": 0, 00:15:19.620 "data_size": 63488 00:15:19.620 }, 00:15:19.620 { 00:15:19.620 "name": null, 00:15:19.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.620 "is_configured": false, 00:15:19.620 "data_offset": 2048, 00:15:19.620 "data_size": 63488 00:15:19.620 }, 00:15:19.620 { 00:15:19.620 "name": null, 00:15:19.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:19.620 "is_configured": false, 00:15:19.620 "data_offset": 2048, 00:15:19.620 "data_size": 63488 00:15:19.620 } 00:15:19.620 ] 00:15:19.620 }' 00:15:19.620 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.620 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.188 [2024-11-27 14:14:50.484880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:20.188 [2024-11-27 14:14:50.484970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.188 [2024-11-27 14:14:50.485003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:20.188 [2024-11-27 14:14:50.485019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.188 [2024-11-27 14:14:50.485628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.188 [2024-11-27 14:14:50.485655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:20.188 [2024-11-27 14:14:50.485758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:20.188 [2024-11-27 14:14:50.485790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.188 pt2 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.188 [2024-11-27 14:14:50.492787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:20.188 [2024-11-27 14:14:50.492852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.188 [2024-11-27 14:14:50.492881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:20.188 [2024-11-27 14:14:50.492894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.188 [2024-11-27 14:14:50.493342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.188 [2024-11-27 14:14:50.493374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:20.188 [2024-11-27 14:14:50.493454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:20.188 [2024-11-27 14:14:50.493489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.188 pt3 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.188 [2024-11-27 14:14:50.500775] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:15:20.188 [2024-11-27 14:14:50.500841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.188 [2024-11-27 14:14:50.500871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:20.188 [2024-11-27 14:14:50.500886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.188 [2024-11-27 14:14:50.501403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.188 [2024-11-27 14:14:50.501446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:20.188 [2024-11-27 14:14:50.501567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:20.188 [2024-11-27 14:14:50.501616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:20.188 [2024-11-27 14:14:50.501896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:20.188 [2024-11-27 14:14:50.501933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:20.188 [2024-11-27 14:14:50.502361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:20.188 [2024-11-27 14:14:50.502574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:20.188 [2024-11-27 14:14:50.502607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:20.188 [2024-11-27 14:14:50.502783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.188 pt4 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:20.188 
14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.188 "name": "raid_bdev1", 00:15:20.188 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:20.188 "strip_size_kb": 64, 00:15:20.188 "state": "online", 00:15:20.188 "raid_level": "concat", 00:15:20.188 "superblock": true, 00:15:20.188 
"num_base_bdevs": 4, 00:15:20.188 "num_base_bdevs_discovered": 4, 00:15:20.188 "num_base_bdevs_operational": 4, 00:15:20.188 "base_bdevs_list": [ 00:15:20.188 { 00:15:20.188 "name": "pt1", 00:15:20.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.188 "is_configured": true, 00:15:20.188 "data_offset": 2048, 00:15:20.188 "data_size": 63488 00:15:20.188 }, 00:15:20.188 { 00:15:20.188 "name": "pt2", 00:15:20.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.188 "is_configured": true, 00:15:20.188 "data_offset": 2048, 00:15:20.188 "data_size": 63488 00:15:20.188 }, 00:15:20.188 { 00:15:20.188 "name": "pt3", 00:15:20.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.188 "is_configured": true, 00:15:20.188 "data_offset": 2048, 00:15:20.188 "data_size": 63488 00:15:20.188 }, 00:15:20.188 { 00:15:20.188 "name": "pt4", 00:15:20.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.188 "is_configured": true, 00:15:20.188 "data_offset": 2048, 00:15:20.188 "data_size": 63488 00:15:20.188 } 00:15:20.188 ] 00:15:20.188 }' 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.188 14:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:20.755 [2024-11-27 14:14:51.025433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:20.755 "name": "raid_bdev1", 00:15:20.755 "aliases": [ 00:15:20.755 "6ae3aba7-243a-4ab4-afc7-184f51af44c9" 00:15:20.755 ], 00:15:20.755 "product_name": "Raid Volume", 00:15:20.755 "block_size": 512, 00:15:20.755 "num_blocks": 253952, 00:15:20.755 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:20.755 "assigned_rate_limits": { 00:15:20.755 "rw_ios_per_sec": 0, 00:15:20.755 "rw_mbytes_per_sec": 0, 00:15:20.755 "r_mbytes_per_sec": 0, 00:15:20.755 "w_mbytes_per_sec": 0 00:15:20.755 }, 00:15:20.755 "claimed": false, 00:15:20.755 "zoned": false, 00:15:20.755 "supported_io_types": { 00:15:20.755 "read": true, 00:15:20.755 "write": true, 00:15:20.755 "unmap": true, 00:15:20.755 "flush": true, 00:15:20.755 "reset": true, 00:15:20.755 "nvme_admin": false, 00:15:20.755 "nvme_io": false, 00:15:20.755 "nvme_io_md": false, 00:15:20.755 "write_zeroes": true, 00:15:20.755 "zcopy": false, 00:15:20.755 "get_zone_info": false, 00:15:20.755 "zone_management": false, 00:15:20.755 "zone_append": false, 00:15:20.755 "compare": false, 00:15:20.755 "compare_and_write": false, 00:15:20.755 "abort": false, 00:15:20.755 "seek_hole": false, 00:15:20.755 "seek_data": false, 00:15:20.755 "copy": false, 00:15:20.755 "nvme_iov_md": false 00:15:20.755 }, 00:15:20.755 "memory_domains": [ 00:15:20.755 { 00:15:20.755 "dma_device_id": "system", 
00:15:20.755 "dma_device_type": 1 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.755 "dma_device_type": 2 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "system", 00:15:20.755 "dma_device_type": 1 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.755 "dma_device_type": 2 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "system", 00:15:20.755 "dma_device_type": 1 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.755 "dma_device_type": 2 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "system", 00:15:20.755 "dma_device_type": 1 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.755 "dma_device_type": 2 00:15:20.755 } 00:15:20.755 ], 00:15:20.755 "driver_specific": { 00:15:20.755 "raid": { 00:15:20.755 "uuid": "6ae3aba7-243a-4ab4-afc7-184f51af44c9", 00:15:20.755 "strip_size_kb": 64, 00:15:20.755 "state": "online", 00:15:20.755 "raid_level": "concat", 00:15:20.755 "superblock": true, 00:15:20.755 "num_base_bdevs": 4, 00:15:20.755 "num_base_bdevs_discovered": 4, 00:15:20.755 "num_base_bdevs_operational": 4, 00:15:20.755 "base_bdevs_list": [ 00:15:20.755 { 00:15:20.755 "name": "pt1", 00:15:20.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.755 "is_configured": true, 00:15:20.755 "data_offset": 2048, 00:15:20.755 "data_size": 63488 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "name": "pt2", 00:15:20.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.755 "is_configured": true, 00:15:20.755 "data_offset": 2048, 00:15:20.755 "data_size": 63488 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "name": "pt3", 00:15:20.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.755 "is_configured": true, 00:15:20.755 "data_offset": 2048, 00:15:20.755 "data_size": 63488 00:15:20.755 }, 00:15:20.755 { 00:15:20.755 "name": "pt4", 00:15:20.755 
"uuid": "00000000-0000-0000-0000-000000000004", 00:15:20.755 "is_configured": true, 00:15:20.755 "data_offset": 2048, 00:15:20.755 "data_size": 63488 00:15:20.755 } 00:15:20.755 ] 00:15:20.755 } 00:15:20.755 } 00:15:20.755 }' 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:20.755 pt2 00:15:20.755 pt3 00:15:20.755 pt4' 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.755 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.756 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.014 [2024-11-27 14:14:51.381464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6ae3aba7-243a-4ab4-afc7-184f51af44c9 '!=' 6ae3aba7-243a-4ab4-afc7-184f51af44c9 ']' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72946 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72946 ']' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72946 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:21.014 14:14:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72946 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.014 killing process with pid 72946 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72946' 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72946 00:15:21.014 [2024-11-27 14:14:51.458757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.014 [2024-11-27 14:14:51.458875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.014 14:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72946 00:15:21.014 [2024-11-27 14:14:51.458976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.015 [2024-11-27 14:14:51.458991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:21.582 [2024-11-27 14:14:51.786727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.520 14:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:22.520 00:15:22.520 real 0m5.924s 00:15:22.520 user 0m8.955s 00:15:22.520 sys 0m0.912s 00:15:22.520 14:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.520 ************************************ 00:15:22.520 END TEST raid_superblock_test 00:15:22.520 ************************************ 00:15:22.520 14:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.520 
14:14:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:22.520 14:14:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.520 14:14:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.520 14:14:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.520 ************************************ 00:15:22.520 START TEST raid_read_error_test 00:15:22.520 ************************************ 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wGr1RmlUyA 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73211 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73211 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73211 ']' 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.520 14:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.520 [2024-11-27 14:14:52.958523] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:15:22.520 [2024-11-27 14:14:52.958676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:15:22.811 [2024-11-27 14:14:53.133172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.811 [2024-11-27 14:14:53.282075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.090 [2024-11-27 14:14:53.500671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.090 [2024-11-27 14:14:53.500763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.658 14:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.658 14:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.658 14:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.658 14:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:23.658 14:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 BaseBdev1_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 true 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 [2024-11-27 14:14:54.031028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:23.658 [2024-11-27 14:14:54.031107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.658 [2024-11-27 14:14:54.031135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:23.658 [2024-11-27 14:14:54.031152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.658 [2024-11-27 14:14:54.033778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.658 [2024-11-27 14:14:54.033842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.658 BaseBdev1 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 BaseBdev2_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 true 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 [2024-11-27 14:14:54.086588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:23.658 [2024-11-27 14:14:54.086685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.658 [2024-11-27 14:14:54.086712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:23.658 [2024-11-27 14:14:54.086730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.658 [2024-11-27 14:14:54.089659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.658 [2024-11-27 14:14:54.089699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:23.658 BaseBdev2 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 BaseBdev3_malloc 00:15:23.658 14:14:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 true 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.658 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.658 [2024-11-27 14:14:54.168166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:23.658 [2024-11-27 14:14:54.168242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.658 [2024-11-27 14:14:54.168270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:23.658 [2024-11-27 14:14:54.168288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.918 [2024-11-27 14:14:54.172286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.918 [2024-11-27 14:14:54.172340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:23.918 BaseBdev3 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.918 BaseBdev4_malloc 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.918 true 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.918 [2024-11-27 14:14:54.227401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:23.918 [2024-11-27 14:14:54.227536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.918 [2024-11-27 14:14:54.227596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:23.918 [2024-11-27 14:14:54.227613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.918 [2024-11-27 14:14:54.230819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.918 [2024-11-27 14:14:54.230878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:23.918 BaseBdev4 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.918 [2024-11-27 14:14:54.235546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.918 [2024-11-27 14:14:54.238395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.918 [2024-11-27 14:14:54.238495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.918 [2024-11-27 14:14:54.238592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.918 [2024-11-27 14:14:54.238948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:23.918 [2024-11-27 14:14:54.238979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:23.918 [2024-11-27 14:14:54.239346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:23.918 [2024-11-27 14:14:54.239551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:23.918 [2024-11-27 14:14:54.239575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:23.918 [2024-11-27 14:14:54.239831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:23.918 14:14:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.918 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.918 "name": "raid_bdev1", 00:15:23.918 "uuid": "54c174f9-2671-4b53-8760-8cee717290c2", 00:15:23.918 "strip_size_kb": 64, 00:15:23.918 "state": "online", 00:15:23.918 "raid_level": "concat", 00:15:23.918 "superblock": true, 00:15:23.918 "num_base_bdevs": 4, 00:15:23.918 "num_base_bdevs_discovered": 4, 00:15:23.918 "num_base_bdevs_operational": 4, 00:15:23.918 "base_bdevs_list": [ 
00:15:23.918 { 00:15:23.918 "name": "BaseBdev1", 00:15:23.918 "uuid": "a3b8608b-b271-5524-8d97-578665d4a886", 00:15:23.918 "is_configured": true, 00:15:23.918 "data_offset": 2048, 00:15:23.918 "data_size": 63488 00:15:23.918 }, 00:15:23.918 { 00:15:23.918 "name": "BaseBdev2", 00:15:23.918 "uuid": "e80f067f-6797-5d59-9639-9f3044cda293", 00:15:23.918 "is_configured": true, 00:15:23.918 "data_offset": 2048, 00:15:23.918 "data_size": 63488 00:15:23.918 }, 00:15:23.918 { 00:15:23.918 "name": "BaseBdev3", 00:15:23.918 "uuid": "f9fc1931-e0fa-5c61-8cd1-d8c5a279b671", 00:15:23.918 "is_configured": true, 00:15:23.918 "data_offset": 2048, 00:15:23.918 "data_size": 63488 00:15:23.918 }, 00:15:23.918 { 00:15:23.918 "name": "BaseBdev4", 00:15:23.918 "uuid": "188deed4-30f4-5389-863e-618262a7ef8b", 00:15:23.918 "is_configured": true, 00:15:23.918 "data_offset": 2048, 00:15:23.919 "data_size": 63488 00:15:23.919 } 00:15:23.919 ] 00:15:23.919 }' 00:15:23.919 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.919 14:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.485 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:24.485 14:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:24.485 [2024-11-27 14:14:54.926571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.417 14:14:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:25.417 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.418 14:14:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.418 "name": "raid_bdev1", 00:15:25.418 "uuid": "54c174f9-2671-4b53-8760-8cee717290c2", 00:15:25.418 "strip_size_kb": 64, 00:15:25.418 "state": "online", 00:15:25.418 "raid_level": "concat", 00:15:25.418 "superblock": true, 00:15:25.418 "num_base_bdevs": 4, 00:15:25.418 "num_base_bdevs_discovered": 4, 00:15:25.418 "num_base_bdevs_operational": 4, 00:15:25.418 "base_bdevs_list": [ 00:15:25.418 { 00:15:25.418 "name": "BaseBdev1", 00:15:25.418 "uuid": "a3b8608b-b271-5524-8d97-578665d4a886", 00:15:25.418 "is_configured": true, 00:15:25.418 "data_offset": 2048, 00:15:25.418 "data_size": 63488 00:15:25.418 }, 00:15:25.418 { 00:15:25.418 "name": "BaseBdev2", 00:15:25.418 "uuid": "e80f067f-6797-5d59-9639-9f3044cda293", 00:15:25.418 "is_configured": true, 00:15:25.418 "data_offset": 2048, 00:15:25.418 "data_size": 63488 00:15:25.418 }, 00:15:25.418 { 00:15:25.418 "name": "BaseBdev3", 00:15:25.418 "uuid": "f9fc1931-e0fa-5c61-8cd1-d8c5a279b671", 00:15:25.418 "is_configured": true, 00:15:25.418 "data_offset": 2048, 00:15:25.418 "data_size": 63488 00:15:25.418 }, 00:15:25.418 { 00:15:25.418 "name": "BaseBdev4", 00:15:25.418 "uuid": "188deed4-30f4-5389-863e-618262a7ef8b", 00:15:25.418 "is_configured": true, 00:15:25.418 "data_offset": 2048, 00:15:25.418 "data_size": 63488 00:15:25.418 } 00:15:25.418 ] 00:15:25.418 }' 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.418 14:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.983 [2024-11-27 14:14:56.343602] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.983 [2024-11-27 14:14:56.343681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.983 [2024-11-27 14:14:56.347157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.983 [2024-11-27 14:14:56.347238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.983 [2024-11-27 14:14:56.347307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.983 [2024-11-27 14:14:56.347327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:25.983 { 00:15:25.983 "results": [ 00:15:25.983 { 00:15:25.983 "job": "raid_bdev1", 00:15:25.983 "core_mask": "0x1", 00:15:25.983 "workload": "randrw", 00:15:25.983 "percentage": 50, 00:15:25.983 "status": "finished", 00:15:25.983 "queue_depth": 1, 00:15:25.983 "io_size": 131072, 00:15:25.983 "runtime": 1.411783, 00:15:25.983 "iops": 9798.956355190565, 00:15:25.983 "mibps": 1224.8695443988206, 00:15:25.983 "io_failed": 1, 00:15:25.983 "io_timeout": 0, 00:15:25.983 "avg_latency_us": 143.17605545881656, 00:15:25.983 "min_latency_us": 44.21818181818182, 00:15:25.983 "max_latency_us": 1854.370909090909 00:15:25.983 } 00:15:25.983 ], 00:15:25.983 "core_count": 1 00:15:25.983 } 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73211 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73211 ']' 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73211 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73211 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73211' 00:15:25.983 killing process with pid 73211 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73211 00:15:25.983 [2024-11-27 14:14:56.384273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.983 14:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73211 00:15:26.241 [2024-11-27 14:14:56.694514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wGr1RmlUyA 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:27.618 00:15:27.618 real 0m5.043s 00:15:27.618 user 0m6.181s 00:15:27.618 sys 0m0.655s 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:27.618 14:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.618 ************************************ 00:15:27.618 END TEST raid_read_error_test 00:15:27.618 ************************************ 00:15:27.618 14:14:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:27.618 14:14:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:27.618 14:14:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.618 14:14:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.618 ************************************ 00:15:27.618 START TEST raid_write_error_test 00:15:27.618 ************************************ 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SIvXrrgRoD 00:15:27.618 14:14:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73362 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73362 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73362 ']' 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.618 14:14:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.618 [2024-11-27 14:14:58.073147] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:15:27.618 [2024-11-27 14:14:58.073317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73362 ] 00:15:27.876 [2024-11-27 14:14:58.263158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.134 [2024-11-27 14:14:58.432524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.392 [2024-11-27 14:14:58.656238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.392 [2024-11-27 14:14:58.656306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.651 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.651 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:28.651 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.651 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.651 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.651 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 BaseBdev1_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 true 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-11-27 14:14:59.192837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:28.908 [2024-11-27 14:14:59.192928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.908 [2024-11-27 14:14:59.192963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:28.908 [2024-11-27 14:14:59.192983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.908 [2024-11-27 14:14:59.195937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.908 [2024-11-27 14:14:59.195983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.908 BaseBdev1 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 BaseBdev2_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:28.908 14:14:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 true 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-11-27 14:14:59.256765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:28.908 [2024-11-27 14:14:59.256869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.908 [2024-11-27 14:14:59.256897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:28.908 [2024-11-27 14:14:59.256915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.908 [2024-11-27 14:14:59.259865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.908 [2024-11-27 14:14:59.259910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:28.908 BaseBdev2 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:28.908 BaseBdev3_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 true 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-11-27 14:14:59.332078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:28.908 [2024-11-27 14:14:59.332180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.908 [2024-11-27 14:14:59.332210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:28.908 [2024-11-27 14:14:59.332229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.908 [2024-11-27 14:14:59.335212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.908 [2024-11-27 14:14:59.335256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:28.908 BaseBdev3 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 BaseBdev4_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 true 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-11-27 14:14:59.392411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:28.908 [2024-11-27 14:14:59.392502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.908 [2024-11-27 14:14:59.392531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:28.908 [2024-11-27 14:14:59.392551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.908 [2024-11-27 14:14:59.395504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.908 [2024-11-27 14:14:59.395552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:28.908 BaseBdev4 
00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.908 [2024-11-27 14:14:59.400507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.908 [2024-11-27 14:14:59.403115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.908 [2024-11-27 14:14:59.403223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.908 [2024-11-27 14:14:59.403322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.908 [2024-11-27 14:14:59.403618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:28.908 [2024-11-27 14:14:59.403648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:28.908 [2024-11-27 14:14:59.403984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:28.908 [2024-11-27 14:14:59.404212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:28.908 [2024-11-27 14:14:59.404238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:28.908 [2024-11-27 14:14:59.404483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.908 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.168 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.168 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.168 "name": "raid_bdev1", 00:15:29.168 "uuid": "2d68b5ff-2448-450f-95b1-9b3e968d874c", 00:15:29.168 "strip_size_kb": 64, 00:15:29.168 "state": "online", 00:15:29.168 "raid_level": "concat", 00:15:29.168 "superblock": true, 00:15:29.168 "num_base_bdevs": 4, 00:15:29.168 "num_base_bdevs_discovered": 4, 00:15:29.168 
"num_base_bdevs_operational": 4, 00:15:29.168 "base_bdevs_list": [ 00:15:29.168 { 00:15:29.168 "name": "BaseBdev1", 00:15:29.168 "uuid": "9b30950f-3d8d-521c-9cd5-4b14cfbb0600", 00:15:29.168 "is_configured": true, 00:15:29.168 "data_offset": 2048, 00:15:29.168 "data_size": 63488 00:15:29.168 }, 00:15:29.168 { 00:15:29.168 "name": "BaseBdev2", 00:15:29.168 "uuid": "0d70549c-30e5-52d6-ab8d-411a3cd3a6bb", 00:15:29.168 "is_configured": true, 00:15:29.168 "data_offset": 2048, 00:15:29.168 "data_size": 63488 00:15:29.168 }, 00:15:29.168 { 00:15:29.168 "name": "BaseBdev3", 00:15:29.168 "uuid": "b40e9292-378b-5869-b6b5-32ba4814d303", 00:15:29.168 "is_configured": true, 00:15:29.168 "data_offset": 2048, 00:15:29.168 "data_size": 63488 00:15:29.168 }, 00:15:29.168 { 00:15:29.168 "name": "BaseBdev4", 00:15:29.168 "uuid": "6b60d70c-46ba-5805-ab25-8f14a6d5074d", 00:15:29.168 "is_configured": true, 00:15:29.168 "data_offset": 2048, 00:15:29.168 "data_size": 63488 00:15:29.168 } 00:15:29.168 ] 00:15:29.168 }' 00:15:29.168 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.168 14:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.433 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:29.433 14:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:29.690 [2024-11-27 14:15:00.018325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.623 14:15:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.623 "name": "raid_bdev1", 00:15:30.623 "uuid": "2d68b5ff-2448-450f-95b1-9b3e968d874c", 00:15:30.623 "strip_size_kb": 64, 00:15:30.623 "state": "online", 00:15:30.623 "raid_level": "concat", 00:15:30.623 "superblock": true, 00:15:30.623 "num_base_bdevs": 4, 00:15:30.623 "num_base_bdevs_discovered": 4, 00:15:30.623 "num_base_bdevs_operational": 4, 00:15:30.623 "base_bdevs_list": [ 00:15:30.623 { 00:15:30.623 "name": "BaseBdev1", 00:15:30.623 "uuid": "9b30950f-3d8d-521c-9cd5-4b14cfbb0600", 00:15:30.623 "is_configured": true, 00:15:30.623 "data_offset": 2048, 00:15:30.623 "data_size": 63488 00:15:30.623 }, 00:15:30.623 { 00:15:30.623 "name": "BaseBdev2", 00:15:30.623 "uuid": "0d70549c-30e5-52d6-ab8d-411a3cd3a6bb", 00:15:30.623 "is_configured": true, 00:15:30.623 "data_offset": 2048, 00:15:30.623 "data_size": 63488 00:15:30.623 }, 00:15:30.623 { 00:15:30.623 "name": "BaseBdev3", 00:15:30.623 "uuid": "b40e9292-378b-5869-b6b5-32ba4814d303", 00:15:30.623 "is_configured": true, 00:15:30.623 "data_offset": 2048, 00:15:30.623 "data_size": 63488 00:15:30.623 }, 00:15:30.623 { 00:15:30.623 "name": "BaseBdev4", 00:15:30.623 "uuid": "6b60d70c-46ba-5805-ab25-8f14a6d5074d", 00:15:30.623 "is_configured": true, 00:15:30.623 "data_offset": 2048, 00:15:30.623 "data_size": 63488 00:15:30.623 } 00:15:30.623 ] 00:15:30.623 }' 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.623 14:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.194 [2024-11-27 14:15:01.436402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.194 [2024-11-27 14:15:01.436479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.194 [2024-11-27 14:15:01.440249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.194 [2024-11-27 14:15:01.440525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.194 [2024-11-27 14:15:01.440710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.194 [2024-11-27 14:15:01.440884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:31.194 { 00:15:31.194 "results": [ 00:15:31.194 { 00:15:31.194 "job": "raid_bdev1", 00:15:31.194 "core_mask": "0x1", 00:15:31.194 "workload": "randrw", 00:15:31.194 "percentage": 50, 00:15:31.194 "status": "finished", 00:15:31.194 "queue_depth": 1, 00:15:31.194 "io_size": 131072, 00:15:31.194 "runtime": 1.415497, 00:15:31.194 "iops": 9741.454768183896, 00:15:31.194 "mibps": 1217.681846022987, 00:15:31.194 "io_failed": 1, 00:15:31.194 "io_timeout": 0, 00:15:31.194 "avg_latency_us": 143.97913217746722, 00:15:31.194 "min_latency_us": 43.52, 00:15:31.194 "max_latency_us": 1861.8181818181818 00:15:31.194 } 00:15:31.194 ], 00:15:31.194 "core_count": 1 00:15:31.194 } 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73362 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73362 ']' 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73362 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73362 00:15:31.194 killing process with pid 73362 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73362' 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73362 00:15:31.194 [2024-11-27 14:15:01.484697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.194 14:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73362 00:15:31.452 [2024-11-27 14:15:01.799743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.826 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SIvXrrgRoD 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:32.827 00:15:32.827 real 0m5.036s 00:15:32.827 user 0m6.109s 
00:15:32.827 sys 0m0.686s 00:15:32.827 ************************************ 00:15:32.827 END TEST raid_write_error_test 00:15:32.827 ************************************ 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.827 14:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.827 14:15:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:32.827 14:15:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:32.827 14:15:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.827 14:15:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.827 14:15:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.827 ************************************ 00:15:32.827 START TEST raid_state_function_test 00:15:32.827 ************************************ 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.827 
14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:32.827 14:15:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:32.827 Process raid pid: 73506 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73506 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73506' 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73506 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73506 ']' 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.827 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.827 [2024-11-27 14:15:03.140669] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:15:32.827 [2024-11-27 14:15:03.140864] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.827 [2024-11-27 14:15:03.317734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.085 [2024-11-27 14:15:03.466958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.343 [2024-11-27 14:15:03.695562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.343 [2024-11-27 14:15:03.695632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.910 [2024-11-27 14:15:04.123615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.910 [2024-11-27 14:15:04.123974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.910 [2024-11-27 14:15:04.124122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.910 [2024-11-27 14:15:04.124198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.910 [2024-11-27 14:15:04.124364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:33.910 [2024-11-27 14:15:04.124422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.910 [2024-11-27 14:15:04.124552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:33.910 [2024-11-27 14:15:04.124591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.910 "name": "Existed_Raid", 00:15:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.910 "strip_size_kb": 0, 00:15:33.910 "state": "configuring", 00:15:33.910 "raid_level": "raid1", 00:15:33.910 "superblock": false, 00:15:33.910 "num_base_bdevs": 4, 00:15:33.910 "num_base_bdevs_discovered": 0, 00:15:33.910 "num_base_bdevs_operational": 4, 00:15:33.910 "base_bdevs_list": [ 00:15:33.910 { 00:15:33.910 "name": "BaseBdev1", 00:15:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.910 "is_configured": false, 00:15:33.910 "data_offset": 0, 00:15:33.910 "data_size": 0 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "name": "BaseBdev2", 00:15:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.910 "is_configured": false, 00:15:33.910 "data_offset": 0, 00:15:33.910 "data_size": 0 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "name": "BaseBdev3", 00:15:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.910 "is_configured": false, 00:15:33.910 "data_offset": 0, 00:15:33.910 "data_size": 0 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "name": "BaseBdev4", 00:15:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.910 "is_configured": false, 00:15:33.910 "data_offset": 0, 00:15:33.910 "data_size": 0 00:15:33.910 } 00:15:33.910 ] 00:15:33.910 }' 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.910 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.169 [2024-11-27 14:15:04.595695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.169 [2024-11-27 14:15:04.595772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.169 [2024-11-27 14:15:04.603642] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.169 [2024-11-27 14:15:04.603962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.169 [2024-11-27 14:15:04.604134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.169 [2024-11-27 14:15:04.604179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.169 [2024-11-27 14:15:04.604191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.169 [2024-11-27 14:15:04.604206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.169 [2024-11-27 14:15:04.604216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.169 [2024-11-27 14:15:04.604231] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.169 [2024-11-27 14:15:04.652966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.169 BaseBdev1 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.169 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.169 [ 00:15:34.169 { 00:15:34.169 "name": "BaseBdev1", 00:15:34.169 "aliases": [ 00:15:34.169 "d75be6ae-4534-403a-9d92-0146c8d25754" 00:15:34.169 ], 00:15:34.169 "product_name": "Malloc disk", 00:15:34.169 "block_size": 512, 00:15:34.169 "num_blocks": 65536, 00:15:34.169 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:34.169 "assigned_rate_limits": { 00:15:34.169 "rw_ios_per_sec": 0, 00:15:34.169 "rw_mbytes_per_sec": 0, 00:15:34.169 "r_mbytes_per_sec": 0, 00:15:34.169 "w_mbytes_per_sec": 0 00:15:34.169 }, 00:15:34.169 "claimed": true, 00:15:34.169 "claim_type": "exclusive_write", 00:15:34.169 "zoned": false, 00:15:34.169 "supported_io_types": { 00:15:34.169 "read": true, 00:15:34.169 "write": true, 00:15:34.169 "unmap": true, 00:15:34.169 "flush": true, 00:15:34.169 "reset": true, 00:15:34.169 "nvme_admin": false, 00:15:34.169 "nvme_io": false, 00:15:34.169 "nvme_io_md": false, 00:15:34.169 "write_zeroes": true, 00:15:34.169 "zcopy": true, 00:15:34.428 "get_zone_info": false, 00:15:34.428 "zone_management": false, 00:15:34.428 "zone_append": false, 00:15:34.428 "compare": false, 00:15:34.428 "compare_and_write": false, 00:15:34.428 "abort": true, 00:15:34.428 "seek_hole": false, 00:15:34.428 "seek_data": false, 00:15:34.428 "copy": true, 00:15:34.428 "nvme_iov_md": false 00:15:34.428 }, 00:15:34.428 "memory_domains": [ 00:15:34.428 { 00:15:34.428 "dma_device_id": "system", 00:15:34.428 "dma_device_type": 1 00:15:34.428 }, 00:15:34.428 { 00:15:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.428 "dma_device_type": 2 00:15:34.428 } 00:15:34.428 ], 00:15:34.428 "driver_specific": {} 00:15:34.428 } 00:15:34.428 ] 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.428 "name": "Existed_Raid", 
00:15:34.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.428 "strip_size_kb": 0, 00:15:34.428 "state": "configuring", 00:15:34.428 "raid_level": "raid1", 00:15:34.428 "superblock": false, 00:15:34.428 "num_base_bdevs": 4, 00:15:34.428 "num_base_bdevs_discovered": 1, 00:15:34.428 "num_base_bdevs_operational": 4, 00:15:34.428 "base_bdevs_list": [ 00:15:34.428 { 00:15:34.428 "name": "BaseBdev1", 00:15:34.428 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:34.428 "is_configured": true, 00:15:34.428 "data_offset": 0, 00:15:34.428 "data_size": 65536 00:15:34.428 }, 00:15:34.428 { 00:15:34.428 "name": "BaseBdev2", 00:15:34.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.428 "is_configured": false, 00:15:34.428 "data_offset": 0, 00:15:34.428 "data_size": 0 00:15:34.428 }, 00:15:34.428 { 00:15:34.428 "name": "BaseBdev3", 00:15:34.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.428 "is_configured": false, 00:15:34.428 "data_offset": 0, 00:15:34.428 "data_size": 0 00:15:34.428 }, 00:15:34.428 { 00:15:34.428 "name": "BaseBdev4", 00:15:34.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.428 "is_configured": false, 00:15:34.428 "data_offset": 0, 00:15:34.428 "data_size": 0 00:15:34.428 } 00:15:34.428 ] 00:15:34.428 }' 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.428 14:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.687 [2024-11-27 14:15:05.185359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.687 [2024-11-27 14:15:05.185788] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.687 [2024-11-27 14:15:05.193242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.687 [2024-11-27 14:15:05.196108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.687 [2024-11-27 14:15:05.196323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.687 [2024-11-27 14:15:05.196469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.687 [2024-11-27 14:15:05.196528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.687 [2024-11-27 14:15:05.196733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.687 [2024-11-27 14:15:05.196785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.687 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:34.946 
14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.946 "name": "Existed_Raid", 00:15:34.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.946 "strip_size_kb": 0, 00:15:34.946 "state": "configuring", 00:15:34.946 "raid_level": "raid1", 00:15:34.946 "superblock": false, 00:15:34.946 "num_base_bdevs": 4, 00:15:34.946 "num_base_bdevs_discovered": 1, 
00:15:34.946 "num_base_bdevs_operational": 4, 00:15:34.946 "base_bdevs_list": [ 00:15:34.946 { 00:15:34.946 "name": "BaseBdev1", 00:15:34.946 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:34.946 "is_configured": true, 00:15:34.946 "data_offset": 0, 00:15:34.946 "data_size": 65536 00:15:34.946 }, 00:15:34.946 { 00:15:34.946 "name": "BaseBdev2", 00:15:34.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.946 "is_configured": false, 00:15:34.946 "data_offset": 0, 00:15:34.946 "data_size": 0 00:15:34.946 }, 00:15:34.946 { 00:15:34.946 "name": "BaseBdev3", 00:15:34.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.946 "is_configured": false, 00:15:34.946 "data_offset": 0, 00:15:34.946 "data_size": 0 00:15:34.946 }, 00:15:34.946 { 00:15:34.946 "name": "BaseBdev4", 00:15:34.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.946 "is_configured": false, 00:15:34.946 "data_offset": 0, 00:15:34.946 "data_size": 0 00:15:34.946 } 00:15:34.946 ] 00:15:34.946 }' 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.946 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.205 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.205 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.205 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.465 BaseBdev2 00:15:35.465 [2024-11-27 14:15:05.732987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.465 [ 00:15:35.465 { 00:15:35.465 "name": "BaseBdev2", 00:15:35.465 "aliases": [ 00:15:35.465 "835950b0-b855-4731-a2e4-910507c11359" 00:15:35.465 ], 00:15:35.465 "product_name": "Malloc disk", 00:15:35.465 "block_size": 512, 00:15:35.465 "num_blocks": 65536, 00:15:35.465 "uuid": "835950b0-b855-4731-a2e4-910507c11359", 00:15:35.465 "assigned_rate_limits": { 00:15:35.465 "rw_ios_per_sec": 0, 00:15:35.465 "rw_mbytes_per_sec": 0, 00:15:35.465 "r_mbytes_per_sec": 0, 00:15:35.465 "w_mbytes_per_sec": 0 00:15:35.465 }, 00:15:35.465 "claimed": true, 00:15:35.465 "claim_type": "exclusive_write", 00:15:35.465 "zoned": false, 00:15:35.465 "supported_io_types": { 00:15:35.465 "read": true, 
00:15:35.465 "write": true, 00:15:35.465 "unmap": true, 00:15:35.465 "flush": true, 00:15:35.465 "reset": true, 00:15:35.465 "nvme_admin": false, 00:15:35.465 "nvme_io": false, 00:15:35.465 "nvme_io_md": false, 00:15:35.465 "write_zeroes": true, 00:15:35.465 "zcopy": true, 00:15:35.465 "get_zone_info": false, 00:15:35.465 "zone_management": false, 00:15:35.465 "zone_append": false, 00:15:35.465 "compare": false, 00:15:35.465 "compare_and_write": false, 00:15:35.465 "abort": true, 00:15:35.465 "seek_hole": false, 00:15:35.465 "seek_data": false, 00:15:35.465 "copy": true, 00:15:35.465 "nvme_iov_md": false 00:15:35.465 }, 00:15:35.465 "memory_domains": [ 00:15:35.465 { 00:15:35.465 "dma_device_id": "system", 00:15:35.465 "dma_device_type": 1 00:15:35.465 }, 00:15:35.465 { 00:15:35.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.465 "dma_device_type": 2 00:15:35.465 } 00:15:35.465 ], 00:15:35.465 "driver_specific": {} 00:15:35.465 } 00:15:35.465 ] 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.465 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.466 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.466 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.466 "name": "Existed_Raid", 00:15:35.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.466 "strip_size_kb": 0, 00:15:35.466 "state": "configuring", 00:15:35.466 "raid_level": "raid1", 00:15:35.466 "superblock": false, 00:15:35.466 "num_base_bdevs": 4, 00:15:35.466 "num_base_bdevs_discovered": 2, 00:15:35.466 "num_base_bdevs_operational": 4, 00:15:35.466 "base_bdevs_list": [ 00:15:35.466 { 00:15:35.466 "name": "BaseBdev1", 00:15:35.466 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:35.466 "is_configured": true, 00:15:35.466 "data_offset": 0, 00:15:35.466 "data_size": 65536 00:15:35.466 }, 00:15:35.466 { 00:15:35.466 "name": "BaseBdev2", 00:15:35.466 "uuid": "835950b0-b855-4731-a2e4-910507c11359", 00:15:35.466 "is_configured": true, 
00:15:35.466 "data_offset": 0, 00:15:35.466 "data_size": 65536 00:15:35.466 }, 00:15:35.466 { 00:15:35.466 "name": "BaseBdev3", 00:15:35.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.466 "is_configured": false, 00:15:35.466 "data_offset": 0, 00:15:35.466 "data_size": 0 00:15:35.466 }, 00:15:35.466 { 00:15:35.466 "name": "BaseBdev4", 00:15:35.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.466 "is_configured": false, 00:15:35.466 "data_offset": 0, 00:15:35.466 "data_size": 0 00:15:35.466 } 00:15:35.466 ] 00:15:35.466 }' 00:15:35.466 14:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.466 14:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.034 [2024-11-27 14:15:06.344409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.034 BaseBdev3 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.034 [ 00:15:36.034 { 00:15:36.034 "name": "BaseBdev3", 00:15:36.034 "aliases": [ 00:15:36.034 "e8bb040c-da41-4d06-ba94-89ea3d2a6006" 00:15:36.034 ], 00:15:36.034 "product_name": "Malloc disk", 00:15:36.034 "block_size": 512, 00:15:36.034 "num_blocks": 65536, 00:15:36.034 "uuid": "e8bb040c-da41-4d06-ba94-89ea3d2a6006", 00:15:36.034 "assigned_rate_limits": { 00:15:36.034 "rw_ios_per_sec": 0, 00:15:36.034 "rw_mbytes_per_sec": 0, 00:15:36.034 "r_mbytes_per_sec": 0, 00:15:36.034 "w_mbytes_per_sec": 0 00:15:36.034 }, 00:15:36.034 "claimed": true, 00:15:36.034 "claim_type": "exclusive_write", 00:15:36.034 "zoned": false, 00:15:36.034 "supported_io_types": { 00:15:36.034 "read": true, 00:15:36.034 "write": true, 00:15:36.034 "unmap": true, 00:15:36.034 "flush": true, 00:15:36.034 "reset": true, 00:15:36.034 "nvme_admin": false, 00:15:36.034 "nvme_io": false, 00:15:36.034 "nvme_io_md": false, 00:15:36.034 "write_zeroes": true, 00:15:36.034 "zcopy": true, 00:15:36.034 "get_zone_info": false, 00:15:36.034 "zone_management": false, 00:15:36.034 "zone_append": false, 00:15:36.034 "compare": false, 00:15:36.034 "compare_and_write": false, 
00:15:36.034 "abort": true, 00:15:36.034 "seek_hole": false, 00:15:36.034 "seek_data": false, 00:15:36.034 "copy": true, 00:15:36.034 "nvme_iov_md": false 00:15:36.034 }, 00:15:36.034 "memory_domains": [ 00:15:36.034 { 00:15:36.034 "dma_device_id": "system", 00:15:36.034 "dma_device_type": 1 00:15:36.034 }, 00:15:36.034 { 00:15:36.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.034 "dma_device_type": 2 00:15:36.034 } 00:15:36.034 ], 00:15:36.034 "driver_specific": {} 00:15:36.034 } 00:15:36.034 ] 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.034 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.034 "name": "Existed_Raid", 00:15:36.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.035 "strip_size_kb": 0, 00:15:36.035 "state": "configuring", 00:15:36.035 "raid_level": "raid1", 00:15:36.035 "superblock": false, 00:15:36.035 "num_base_bdevs": 4, 00:15:36.035 "num_base_bdevs_discovered": 3, 00:15:36.035 "num_base_bdevs_operational": 4, 00:15:36.035 "base_bdevs_list": [ 00:15:36.035 { 00:15:36.035 "name": "BaseBdev1", 00:15:36.035 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:36.035 "is_configured": true, 00:15:36.035 "data_offset": 0, 00:15:36.035 "data_size": 65536 00:15:36.035 }, 00:15:36.035 { 00:15:36.035 "name": "BaseBdev2", 00:15:36.035 "uuid": "835950b0-b855-4731-a2e4-910507c11359", 00:15:36.035 "is_configured": true, 00:15:36.035 "data_offset": 0, 00:15:36.035 "data_size": 65536 00:15:36.035 }, 00:15:36.035 { 00:15:36.035 "name": "BaseBdev3", 00:15:36.035 "uuid": "e8bb040c-da41-4d06-ba94-89ea3d2a6006", 00:15:36.035 "is_configured": true, 00:15:36.035 "data_offset": 0, 00:15:36.035 "data_size": 65536 00:15:36.035 }, 00:15:36.035 { 00:15:36.035 "name": "BaseBdev4", 00:15:36.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.035 "is_configured": false, 
00:15:36.035 "data_offset": 0, 00:15:36.035 "data_size": 0 00:15:36.035 } 00:15:36.035 ] 00:15:36.035 }' 00:15:36.035 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.035 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.644 [2024-11-27 14:15:06.959609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.644 BaseBdev4 00:15:36.644 [2024-11-27 14:15:06.960047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:36.644 [2024-11-27 14:15:06.960072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:36.644 [2024-11-27 14:15:06.960581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:36.644 [2024-11-27 14:15:06.960906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.644 [2024-11-27 14:15:06.960960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:36.644 [2024-11-27 14:15:06.961324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.644 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.645 [ 00:15:36.645 { 00:15:36.645 "name": "BaseBdev4", 00:15:36.645 "aliases": [ 00:15:36.645 "ffd91b10-04b4-452c-a663-95e84c4081f2" 00:15:36.645 ], 00:15:36.645 "product_name": "Malloc disk", 00:15:36.645 "block_size": 512, 00:15:36.645 "num_blocks": 65536, 00:15:36.645 "uuid": "ffd91b10-04b4-452c-a663-95e84c4081f2", 00:15:36.645 "assigned_rate_limits": { 00:15:36.645 "rw_ios_per_sec": 0, 00:15:36.645 "rw_mbytes_per_sec": 0, 00:15:36.645 "r_mbytes_per_sec": 0, 00:15:36.645 "w_mbytes_per_sec": 0 00:15:36.645 }, 00:15:36.645 "claimed": true, 00:15:36.645 "claim_type": "exclusive_write", 00:15:36.645 "zoned": false, 00:15:36.645 "supported_io_types": { 00:15:36.645 "read": true, 00:15:36.645 "write": true, 00:15:36.645 "unmap": true, 00:15:36.645 "flush": true, 00:15:36.645 "reset": true, 00:15:36.645 
"nvme_admin": false, 00:15:36.645 "nvme_io": false, 00:15:36.645 "nvme_io_md": false, 00:15:36.645 "write_zeroes": true, 00:15:36.645 "zcopy": true, 00:15:36.645 "get_zone_info": false, 00:15:36.645 "zone_management": false, 00:15:36.645 "zone_append": false, 00:15:36.645 "compare": false, 00:15:36.645 "compare_and_write": false, 00:15:36.645 "abort": true, 00:15:36.645 "seek_hole": false, 00:15:36.645 "seek_data": false, 00:15:36.645 "copy": true, 00:15:36.645 "nvme_iov_md": false 00:15:36.645 }, 00:15:36.645 "memory_domains": [ 00:15:36.645 { 00:15:36.645 "dma_device_id": "system", 00:15:36.645 "dma_device_type": 1 00:15:36.645 }, 00:15:36.645 { 00:15:36.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.645 "dma_device_type": 2 00:15:36.645 } 00:15:36.645 ], 00:15:36.645 "driver_specific": {} 00:15:36.645 } 00:15:36.645 ] 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.645 14:15:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.645 14:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.645 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.645 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.645 "name": "Existed_Raid", 00:15:36.645 "uuid": "8b1a9a59-9c98-4632-9b46-5b13acbe9b1f", 00:15:36.645 "strip_size_kb": 0, 00:15:36.645 "state": "online", 00:15:36.645 "raid_level": "raid1", 00:15:36.645 "superblock": false, 00:15:36.645 "num_base_bdevs": 4, 00:15:36.645 "num_base_bdevs_discovered": 4, 00:15:36.645 "num_base_bdevs_operational": 4, 00:15:36.645 "base_bdevs_list": [ 00:15:36.645 { 00:15:36.645 "name": "BaseBdev1", 00:15:36.645 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:36.645 "is_configured": true, 00:15:36.645 "data_offset": 0, 00:15:36.645 "data_size": 65536 00:15:36.645 }, 00:15:36.645 { 00:15:36.645 "name": "BaseBdev2", 00:15:36.645 "uuid": "835950b0-b855-4731-a2e4-910507c11359", 00:15:36.645 "is_configured": true, 00:15:36.645 "data_offset": 0, 00:15:36.645 "data_size": 65536 00:15:36.645 }, 00:15:36.645 { 00:15:36.645 "name": "BaseBdev3", 00:15:36.645 "uuid": 
"e8bb040c-da41-4d06-ba94-89ea3d2a6006", 00:15:36.645 "is_configured": true, 00:15:36.645 "data_offset": 0, 00:15:36.645 "data_size": 65536 00:15:36.645 }, 00:15:36.645 { 00:15:36.645 "name": "BaseBdev4", 00:15:36.645 "uuid": "ffd91b10-04b4-452c-a663-95e84c4081f2", 00:15:36.645 "is_configured": true, 00:15:36.645 "data_offset": 0, 00:15:36.645 "data_size": 65536 00:15:36.645 } 00:15:36.645 ] 00:15:36.645 }' 00:15:36.645 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.645 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 [2024-11-27 14:15:07.504325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.218 14:15:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.218 "name": "Existed_Raid", 00:15:37.218 "aliases": [ 00:15:37.218 "8b1a9a59-9c98-4632-9b46-5b13acbe9b1f" 00:15:37.218 ], 00:15:37.218 "product_name": "Raid Volume", 00:15:37.218 "block_size": 512, 00:15:37.218 "num_blocks": 65536, 00:15:37.218 "uuid": "8b1a9a59-9c98-4632-9b46-5b13acbe9b1f", 00:15:37.218 "assigned_rate_limits": { 00:15:37.218 "rw_ios_per_sec": 0, 00:15:37.218 "rw_mbytes_per_sec": 0, 00:15:37.218 "r_mbytes_per_sec": 0, 00:15:37.218 "w_mbytes_per_sec": 0 00:15:37.218 }, 00:15:37.218 "claimed": false, 00:15:37.218 "zoned": false, 00:15:37.218 "supported_io_types": { 00:15:37.218 "read": true, 00:15:37.218 "write": true, 00:15:37.218 "unmap": false, 00:15:37.218 "flush": false, 00:15:37.218 "reset": true, 00:15:37.218 "nvme_admin": false, 00:15:37.218 "nvme_io": false, 00:15:37.218 "nvme_io_md": false, 00:15:37.218 "write_zeroes": true, 00:15:37.218 "zcopy": false, 00:15:37.218 "get_zone_info": false, 00:15:37.218 "zone_management": false, 00:15:37.218 "zone_append": false, 00:15:37.218 "compare": false, 00:15:37.218 "compare_and_write": false, 00:15:37.218 "abort": false, 00:15:37.218 "seek_hole": false, 00:15:37.218 "seek_data": false, 00:15:37.218 "copy": false, 00:15:37.218 "nvme_iov_md": false 00:15:37.218 }, 00:15:37.218 "memory_domains": [ 00:15:37.218 { 00:15:37.218 "dma_device_id": "system", 00:15:37.218 "dma_device_type": 1 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.218 "dma_device_type": 2 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "system", 00:15:37.218 "dma_device_type": 1 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.218 "dma_device_type": 2 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "system", 00:15:37.218 "dma_device_type": 1 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:37.218 "dma_device_type": 2 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "system", 00:15:37.218 "dma_device_type": 1 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.218 "dma_device_type": 2 00:15:37.218 } 00:15:37.218 ], 00:15:37.218 "driver_specific": { 00:15:37.218 "raid": { 00:15:37.218 "uuid": "8b1a9a59-9c98-4632-9b46-5b13acbe9b1f", 00:15:37.218 "strip_size_kb": 0, 00:15:37.218 "state": "online", 00:15:37.218 "raid_level": "raid1", 00:15:37.218 "superblock": false, 00:15:37.218 "num_base_bdevs": 4, 00:15:37.218 "num_base_bdevs_discovered": 4, 00:15:37.218 "num_base_bdevs_operational": 4, 00:15:37.218 "base_bdevs_list": [ 00:15:37.218 { 00:15:37.218 "name": "BaseBdev1", 00:15:37.218 "uuid": "d75be6ae-4534-403a-9d92-0146c8d25754", 00:15:37.218 "is_configured": true, 00:15:37.218 "data_offset": 0, 00:15:37.218 "data_size": 65536 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "name": "BaseBdev2", 00:15:37.218 "uuid": "835950b0-b855-4731-a2e4-910507c11359", 00:15:37.218 "is_configured": true, 00:15:37.218 "data_offset": 0, 00:15:37.218 "data_size": 65536 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "name": "BaseBdev3", 00:15:37.218 "uuid": "e8bb040c-da41-4d06-ba94-89ea3d2a6006", 00:15:37.218 "is_configured": true, 00:15:37.218 "data_offset": 0, 00:15:37.218 "data_size": 65536 00:15:37.218 }, 00:15:37.218 { 00:15:37.218 "name": "BaseBdev4", 00:15:37.218 "uuid": "ffd91b10-04b4-452c-a663-95e84c4081f2", 00:15:37.218 "is_configured": true, 00:15:37.218 "data_offset": 0, 00:15:37.218 "data_size": 65536 00:15:37.218 } 00:15:37.218 ] 00:15:37.218 } 00:15:37.218 } 00:15:37.218 }' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:37.218 BaseBdev2 00:15:37.218 BaseBdev3 
00:15:37.218 BaseBdev4' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.478 14:15:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.478 14:15:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.478 [2024-11-27 14:15:07.836085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.478 
14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.478 "name": "Existed_Raid", 00:15:37.478 "uuid": "8b1a9a59-9c98-4632-9b46-5b13acbe9b1f", 00:15:37.478 "strip_size_kb": 0, 00:15:37.478 "state": "online", 00:15:37.478 "raid_level": "raid1", 00:15:37.478 "superblock": false, 00:15:37.478 "num_base_bdevs": 4, 00:15:37.478 "num_base_bdevs_discovered": 3, 00:15:37.478 "num_base_bdevs_operational": 3, 00:15:37.478 "base_bdevs_list": [ 00:15:37.478 { 00:15:37.478 "name": null, 00:15:37.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.478 "is_configured": false, 00:15:37.478 "data_offset": 0, 00:15:37.478 "data_size": 65536 00:15:37.478 }, 00:15:37.478 { 00:15:37.478 "name": "BaseBdev2", 00:15:37.478 "uuid": "835950b0-b855-4731-a2e4-910507c11359", 00:15:37.478 "is_configured": true, 00:15:37.478 "data_offset": 0, 00:15:37.478 "data_size": 65536 00:15:37.478 }, 00:15:37.478 { 00:15:37.478 "name": "BaseBdev3", 00:15:37.478 "uuid": "e8bb040c-da41-4d06-ba94-89ea3d2a6006", 00:15:37.478 "is_configured": true, 00:15:37.478 "data_offset": 0, 
00:15:37.478 "data_size": 65536 00:15:37.478 }, 00:15:37.478 { 00:15:37.478 "name": "BaseBdev4", 00:15:37.478 "uuid": "ffd91b10-04b4-452c-a663-95e84c4081f2", 00:15:37.478 "is_configured": true, 00:15:37.478 "data_offset": 0, 00:15:37.478 "data_size": 65536 00:15:37.478 } 00:15:37.478 ] 00:15:37.478 }' 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.478 14:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.045 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.046 [2024-11-27 14:15:08.481017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.304 [2024-11-27 14:15:08.636587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.304 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.304 [2024-11-27 14:15:08.784355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:38.304 [2024-11-27 14:15:08.784769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.563 [2024-11-27 14:15:08.875532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.563 [2024-11-27 14:15:08.875857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.563 [2024-11-27 14:15:08.875893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.563 BaseBdev2 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.563 14:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.563 [ 00:15:38.563 { 00:15:38.563 "name": "BaseBdev2", 00:15:38.563 "aliases": [ 00:15:38.563 "7d47f0ba-81c1-4514-809e-6eee106cad99" 00:15:38.563 ], 00:15:38.563 "product_name": "Malloc disk", 00:15:38.563 "block_size": 512, 00:15:38.563 "num_blocks": 65536, 00:15:38.563 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:38.563 "assigned_rate_limits": { 00:15:38.563 "rw_ios_per_sec": 0, 00:15:38.563 "rw_mbytes_per_sec": 0, 00:15:38.563 "r_mbytes_per_sec": 0, 00:15:38.563 "w_mbytes_per_sec": 0 00:15:38.563 }, 00:15:38.563 "claimed": false, 00:15:38.563 "zoned": false, 00:15:38.563 "supported_io_types": { 00:15:38.563 "read": true, 00:15:38.563 "write": true, 00:15:38.563 "unmap": true, 00:15:38.563 "flush": true, 00:15:38.563 "reset": true, 00:15:38.563 "nvme_admin": false, 00:15:38.563 "nvme_io": false, 00:15:38.563 "nvme_io_md": false, 00:15:38.563 "write_zeroes": true, 00:15:38.563 "zcopy": true, 00:15:38.563 "get_zone_info": false, 00:15:38.563 "zone_management": false, 00:15:38.563 "zone_append": false, 
00:15:38.563 "compare": false, 00:15:38.563 "compare_and_write": false, 00:15:38.563 "abort": true, 00:15:38.563 "seek_hole": false, 00:15:38.563 "seek_data": false, 00:15:38.563 "copy": true, 00:15:38.563 "nvme_iov_md": false 00:15:38.563 }, 00:15:38.564 "memory_domains": [ 00:15:38.564 { 00:15:38.564 "dma_device_id": "system", 00:15:38.564 "dma_device_type": 1 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.564 "dma_device_type": 2 00:15:38.564 } 00:15:38.564 ], 00:15:38.564 "driver_specific": {} 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 BaseBdev3 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.564 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.833 [ 00:15:38.833 { 00:15:38.833 "name": "BaseBdev3", 00:15:38.833 "aliases": [ 00:15:38.833 "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f" 00:15:38.833 ], 00:15:38.833 "product_name": "Malloc disk", 00:15:38.833 "block_size": 512, 00:15:38.833 "num_blocks": 65536, 00:15:38.833 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:38.833 "assigned_rate_limits": { 00:15:38.833 "rw_ios_per_sec": 0, 00:15:38.833 "rw_mbytes_per_sec": 0, 00:15:38.833 "r_mbytes_per_sec": 0, 00:15:38.833 "w_mbytes_per_sec": 0 00:15:38.833 }, 00:15:38.833 "claimed": false, 00:15:38.833 "zoned": false, 00:15:38.833 "supported_io_types": { 00:15:38.833 "read": true, 00:15:38.834 "write": true, 00:15:38.834 "unmap": true, 00:15:38.834 "flush": true, 00:15:38.834 "reset": true, 00:15:38.834 "nvme_admin": false, 00:15:38.834 "nvme_io": false, 00:15:38.834 "nvme_io_md": false, 00:15:38.834 "write_zeroes": true, 00:15:38.834 "zcopy": true, 00:15:38.834 "get_zone_info": false, 00:15:38.834 "zone_management": false, 00:15:38.834 "zone_append": false, 
00:15:38.834 "compare": false, 00:15:38.834 "compare_and_write": false, 00:15:38.834 "abort": true, 00:15:38.834 "seek_hole": false, 00:15:38.834 "seek_data": false, 00:15:38.834 "copy": true, 00:15:38.834 "nvme_iov_md": false 00:15:38.834 }, 00:15:38.834 "memory_domains": [ 00:15:38.834 { 00:15:38.834 "dma_device_id": "system", 00:15:38.834 "dma_device_type": 1 00:15:38.834 }, 00:15:38.834 { 00:15:38.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.834 "dma_device_type": 2 00:15:38.834 } 00:15:38.834 ], 00:15:38.834 "driver_specific": {} 00:15:38.834 } 00:15:38.834 ] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 BaseBdev4 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 [ 00:15:38.834 { 00:15:38.834 "name": "BaseBdev4", 00:15:38.834 "aliases": [ 00:15:38.834 "11067639-7b7d-4f86-9615-5f049e93da64" 00:15:38.834 ], 00:15:38.834 "product_name": "Malloc disk", 00:15:38.834 "block_size": 512, 00:15:38.834 "num_blocks": 65536, 00:15:38.834 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:38.834 "assigned_rate_limits": { 00:15:38.834 "rw_ios_per_sec": 0, 00:15:38.834 "rw_mbytes_per_sec": 0, 00:15:38.834 "r_mbytes_per_sec": 0, 00:15:38.834 "w_mbytes_per_sec": 0 00:15:38.834 }, 00:15:38.834 "claimed": false, 00:15:38.834 "zoned": false, 00:15:38.834 "supported_io_types": { 00:15:38.834 "read": true, 00:15:38.834 "write": true, 00:15:38.834 "unmap": true, 00:15:38.834 "flush": true, 00:15:38.834 "reset": true, 00:15:38.834 "nvme_admin": false, 00:15:38.834 "nvme_io": false, 00:15:38.834 "nvme_io_md": false, 00:15:38.834 "write_zeroes": true, 00:15:38.834 "zcopy": true, 00:15:38.834 "get_zone_info": false, 00:15:38.834 "zone_management": false, 00:15:38.834 "zone_append": false, 
00:15:38.834 "compare": false, 00:15:38.834 "compare_and_write": false, 00:15:38.834 "abort": true, 00:15:38.834 "seek_hole": false, 00:15:38.834 "seek_data": false, 00:15:38.834 "copy": true, 00:15:38.834 "nvme_iov_md": false 00:15:38.834 }, 00:15:38.834 "memory_domains": [ 00:15:38.834 { 00:15:38.834 "dma_device_id": "system", 00:15:38.834 "dma_device_type": 1 00:15:38.834 }, 00:15:38.834 { 00:15:38.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.834 "dma_device_type": 2 00:15:38.834 } 00:15:38.834 ], 00:15:38.834 "driver_specific": {} 00:15:38.834 } 00:15:38.834 ] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 [2024-11-27 14:15:09.175791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.834 [2024-11-27 14:15:09.176176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.834 [2024-11-27 14:15:09.176321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.834 [2024-11-27 14:15:09.179166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.834 [2024-11-27 14:15:09.179358] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.834 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:38.834 "name": "Existed_Raid", 00:15:38.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.834 "strip_size_kb": 0, 00:15:38.834 "state": "configuring", 00:15:38.834 "raid_level": "raid1", 00:15:38.834 "superblock": false, 00:15:38.834 "num_base_bdevs": 4, 00:15:38.834 "num_base_bdevs_discovered": 3, 00:15:38.834 "num_base_bdevs_operational": 4, 00:15:38.834 "base_bdevs_list": [ 00:15:38.834 { 00:15:38.834 "name": "BaseBdev1", 00:15:38.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.834 "is_configured": false, 00:15:38.834 "data_offset": 0, 00:15:38.834 "data_size": 0 00:15:38.834 }, 00:15:38.834 { 00:15:38.834 "name": "BaseBdev2", 00:15:38.834 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:38.834 "is_configured": true, 00:15:38.834 "data_offset": 0, 00:15:38.834 "data_size": 65536 00:15:38.834 }, 00:15:38.834 { 00:15:38.834 "name": "BaseBdev3", 00:15:38.834 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:38.834 "is_configured": true, 00:15:38.834 "data_offset": 0, 00:15:38.834 "data_size": 65536 00:15:38.834 }, 00:15:38.834 { 00:15:38.834 "name": "BaseBdev4", 00:15:38.834 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:38.834 "is_configured": true, 00:15:38.834 "data_offset": 0, 00:15:38.834 "data_size": 65536 00:15:38.834 } 00:15:38.834 ] 00:15:38.834 }' 00:15:38.835 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.835 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.402 [2024-11-27 14:15:09.704006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.402 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.403 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.403 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.403 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.403 "name": "Existed_Raid", 00:15:39.403 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:39.403 "strip_size_kb": 0, 00:15:39.403 "state": "configuring", 00:15:39.403 "raid_level": "raid1", 00:15:39.403 "superblock": false, 00:15:39.403 "num_base_bdevs": 4, 00:15:39.403 "num_base_bdevs_discovered": 2, 00:15:39.403 "num_base_bdevs_operational": 4, 00:15:39.403 "base_bdevs_list": [ 00:15:39.403 { 00:15:39.403 "name": "BaseBdev1", 00:15:39.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.403 "is_configured": false, 00:15:39.403 "data_offset": 0, 00:15:39.403 "data_size": 0 00:15:39.403 }, 00:15:39.403 { 00:15:39.403 "name": null, 00:15:39.403 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:39.403 "is_configured": false, 00:15:39.403 "data_offset": 0, 00:15:39.403 "data_size": 65536 00:15:39.403 }, 00:15:39.403 { 00:15:39.403 "name": "BaseBdev3", 00:15:39.403 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:39.403 "is_configured": true, 00:15:39.403 "data_offset": 0, 00:15:39.403 "data_size": 65536 00:15:39.403 }, 00:15:39.403 { 00:15:39.403 "name": "BaseBdev4", 00:15:39.403 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:39.403 "is_configured": true, 00:15:39.403 "data_offset": 0, 00:15:39.403 "data_size": 65536 00:15:39.403 } 00:15:39.403 ] 00:15:39.403 }' 00:15:39.403 14:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.403 14:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.969 [2024-11-27 14:15:10.321697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.969 BaseBdev1 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.969 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.969 [ 00:15:39.969 { 00:15:39.969 "name": "BaseBdev1", 00:15:39.969 "aliases": [ 00:15:39.969 "92cb2f5e-6332-4a21-8a98-7bb9df04e426" 00:15:39.969 ], 00:15:39.969 "product_name": "Malloc disk", 00:15:39.969 "block_size": 512, 00:15:39.969 "num_blocks": 65536, 00:15:39.969 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:39.969 "assigned_rate_limits": { 00:15:39.969 "rw_ios_per_sec": 0, 00:15:39.969 "rw_mbytes_per_sec": 0, 00:15:39.969 "r_mbytes_per_sec": 0, 00:15:39.969 "w_mbytes_per_sec": 0 00:15:39.969 }, 00:15:39.969 "claimed": true, 00:15:39.969 "claim_type": "exclusive_write", 00:15:39.969 "zoned": false, 00:15:39.969 "supported_io_types": { 00:15:39.969 "read": true, 00:15:39.969 "write": true, 00:15:39.969 "unmap": true, 00:15:39.969 "flush": true, 00:15:39.969 "reset": true, 00:15:39.969 "nvme_admin": false, 00:15:39.969 "nvme_io": false, 00:15:39.969 "nvme_io_md": false, 00:15:39.969 "write_zeroes": true, 00:15:39.969 "zcopy": true, 00:15:39.969 "get_zone_info": false, 00:15:39.969 "zone_management": false, 00:15:39.969 "zone_append": false, 00:15:39.969 "compare": false, 00:15:39.969 "compare_and_write": false, 00:15:39.970 "abort": true, 00:15:39.970 "seek_hole": false, 00:15:39.970 "seek_data": false, 00:15:39.970 "copy": true, 00:15:39.970 "nvme_iov_md": false 00:15:39.970 }, 00:15:39.970 "memory_domains": [ 00:15:39.970 { 00:15:39.970 "dma_device_id": "system", 00:15:39.970 "dma_device_type": 1 00:15:39.970 }, 00:15:39.970 { 00:15:39.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.970 "dma_device_type": 2 00:15:39.970 } 00:15:39.970 ], 00:15:39.970 "driver_specific": {} 00:15:39.970 } 00:15:39.970 ] 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.970 "name": "Existed_Raid", 00:15:39.970 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:39.970 "strip_size_kb": 0, 00:15:39.970 "state": "configuring", 00:15:39.970 "raid_level": "raid1", 00:15:39.970 "superblock": false, 00:15:39.970 "num_base_bdevs": 4, 00:15:39.970 "num_base_bdevs_discovered": 3, 00:15:39.970 "num_base_bdevs_operational": 4, 00:15:39.970 "base_bdevs_list": [ 00:15:39.970 { 00:15:39.970 "name": "BaseBdev1", 00:15:39.970 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:39.970 "is_configured": true, 00:15:39.970 "data_offset": 0, 00:15:39.970 "data_size": 65536 00:15:39.970 }, 00:15:39.970 { 00:15:39.970 "name": null, 00:15:39.970 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:39.970 "is_configured": false, 00:15:39.970 "data_offset": 0, 00:15:39.970 "data_size": 65536 00:15:39.970 }, 00:15:39.970 { 00:15:39.970 "name": "BaseBdev3", 00:15:39.970 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:39.970 "is_configured": true, 00:15:39.970 "data_offset": 0, 00:15:39.970 "data_size": 65536 00:15:39.970 }, 00:15:39.970 { 00:15:39.970 "name": "BaseBdev4", 00:15:39.970 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:39.970 "is_configured": true, 00:15:39.970 "data_offset": 0, 00:15:39.970 "data_size": 65536 00:15:39.970 } 00:15:39.970 ] 00:15:39.970 }' 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.970 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.535 [2024-11-27 14:15:10.914055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.535 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.536 "name": "Existed_Raid", 00:15:40.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.536 "strip_size_kb": 0, 00:15:40.536 "state": "configuring", 00:15:40.536 "raid_level": "raid1", 00:15:40.536 "superblock": false, 00:15:40.536 "num_base_bdevs": 4, 00:15:40.536 "num_base_bdevs_discovered": 2, 00:15:40.536 "num_base_bdevs_operational": 4, 00:15:40.536 "base_bdevs_list": [ 00:15:40.536 { 00:15:40.536 "name": "BaseBdev1", 00:15:40.536 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:40.536 "is_configured": true, 00:15:40.536 "data_offset": 0, 00:15:40.536 "data_size": 65536 00:15:40.536 }, 00:15:40.536 { 00:15:40.536 "name": null, 00:15:40.536 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:40.536 "is_configured": false, 00:15:40.536 "data_offset": 0, 00:15:40.536 "data_size": 65536 00:15:40.536 }, 00:15:40.536 { 00:15:40.536 "name": null, 00:15:40.536 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:40.536 "is_configured": false, 00:15:40.536 "data_offset": 0, 00:15:40.536 "data_size": 65536 00:15:40.536 }, 00:15:40.536 { 00:15:40.536 "name": "BaseBdev4", 00:15:40.536 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:40.536 "is_configured": true, 00:15:40.536 "data_offset": 0, 00:15:40.536 "data_size": 65536 00:15:40.536 } 00:15:40.536 ] 00:15:40.536 }' 00:15:40.536 14:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.536 14:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.103 [2024-11-27 14:15:11.490252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.103 14:15:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.103 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.103 "name": "Existed_Raid", 00:15:41.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.103 "strip_size_kb": 0, 00:15:41.103 "state": "configuring", 00:15:41.103 "raid_level": "raid1", 00:15:41.103 "superblock": false, 00:15:41.103 "num_base_bdevs": 4, 00:15:41.103 "num_base_bdevs_discovered": 3, 00:15:41.103 "num_base_bdevs_operational": 4, 00:15:41.103 "base_bdevs_list": [ 00:15:41.103 { 00:15:41.103 "name": "BaseBdev1", 00:15:41.103 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:41.103 "is_configured": true, 00:15:41.103 "data_offset": 0, 00:15:41.103 "data_size": 65536 00:15:41.103 }, 00:15:41.103 { 00:15:41.103 "name": null, 00:15:41.103 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:41.103 "is_configured": false, 00:15:41.103 "data_offset": 
0, 00:15:41.103 "data_size": 65536 00:15:41.103 }, 00:15:41.103 { 00:15:41.103 "name": "BaseBdev3", 00:15:41.103 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:41.103 "is_configured": true, 00:15:41.103 "data_offset": 0, 00:15:41.103 "data_size": 65536 00:15:41.103 }, 00:15:41.103 { 00:15:41.103 "name": "BaseBdev4", 00:15:41.103 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:41.103 "is_configured": true, 00:15:41.104 "data_offset": 0, 00:15:41.104 "data_size": 65536 00:15:41.104 } 00:15:41.104 ] 00:15:41.104 }' 00:15:41.104 14:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.104 14:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 [2024-11-27 14:15:12.070548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.668 14:15:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.668 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.926 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.926 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.926 "name": "Existed_Raid", 00:15:41.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.926 "strip_size_kb": 0, 00:15:41.926 "state": "configuring", 00:15:41.926 
"raid_level": "raid1", 00:15:41.926 "superblock": false, 00:15:41.926 "num_base_bdevs": 4, 00:15:41.926 "num_base_bdevs_discovered": 2, 00:15:41.926 "num_base_bdevs_operational": 4, 00:15:41.926 "base_bdevs_list": [ 00:15:41.926 { 00:15:41.926 "name": null, 00:15:41.926 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:41.926 "is_configured": false, 00:15:41.926 "data_offset": 0, 00:15:41.926 "data_size": 65536 00:15:41.926 }, 00:15:41.926 { 00:15:41.926 "name": null, 00:15:41.926 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:41.926 "is_configured": false, 00:15:41.926 "data_offset": 0, 00:15:41.926 "data_size": 65536 00:15:41.926 }, 00:15:41.926 { 00:15:41.926 "name": "BaseBdev3", 00:15:41.926 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:41.926 "is_configured": true, 00:15:41.926 "data_offset": 0, 00:15:41.926 "data_size": 65536 00:15:41.926 }, 00:15:41.926 { 00:15:41.926 "name": "BaseBdev4", 00:15:41.926 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:41.926 "is_configured": true, 00:15:41.926 "data_offset": 0, 00:15:41.926 "data_size": 65536 00:15:41.926 } 00:15:41.926 ] 00:15:41.926 }' 00:15:41.926 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.926 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.184 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.184 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.184 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.184 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.442 [2024-11-27 14:15:12.736947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.442 "name": "Existed_Raid", 00:15:42.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.442 "strip_size_kb": 0, 00:15:42.442 "state": "configuring", 00:15:42.442 "raid_level": "raid1", 00:15:42.442 "superblock": false, 00:15:42.442 "num_base_bdevs": 4, 00:15:42.442 "num_base_bdevs_discovered": 3, 00:15:42.442 "num_base_bdevs_operational": 4, 00:15:42.442 "base_bdevs_list": [ 00:15:42.442 { 00:15:42.442 "name": null, 00:15:42.442 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:42.442 "is_configured": false, 00:15:42.442 "data_offset": 0, 00:15:42.442 "data_size": 65536 00:15:42.442 }, 00:15:42.442 { 00:15:42.442 "name": "BaseBdev2", 00:15:42.442 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:42.442 "is_configured": true, 00:15:42.442 "data_offset": 0, 00:15:42.442 "data_size": 65536 00:15:42.442 }, 00:15:42.442 { 00:15:42.442 "name": "BaseBdev3", 00:15:42.442 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:42.442 "is_configured": true, 00:15:42.442 "data_offset": 0, 00:15:42.442 "data_size": 65536 00:15:42.442 }, 00:15:42.442 { 00:15:42.442 "name": "BaseBdev4", 00:15:42.442 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:42.442 "is_configured": true, 00:15:42.442 "data_offset": 0, 00:15:42.442 "data_size": 65536 00:15:42.442 } 00:15:42.442 ] 00:15:42.442 }' 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.442 14:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.008 14:15:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 92cb2f5e-6332-4a21-8a98-7bb9df04e426 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.008 [2024-11-27 14:15:13.390514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:43.008 [2024-11-27 14:15:13.390956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:43.008 [2024-11-27 14:15:13.391017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:43.008 
NewBaseBdev 00:15:43.008 [2024-11-27 14:15:13.391475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:43.008 [2024-11-27 14:15:13.391719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:43.008 [2024-11-27 14:15:13.391735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:43.008 [2024-11-27 14:15:13.392071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:43.008 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.009 [ 00:15:43.009 { 00:15:43.009 "name": "NewBaseBdev", 00:15:43.009 "aliases": [ 00:15:43.009 "92cb2f5e-6332-4a21-8a98-7bb9df04e426" 00:15:43.009 ], 00:15:43.009 "product_name": "Malloc disk", 00:15:43.009 "block_size": 512, 00:15:43.009 "num_blocks": 65536, 00:15:43.009 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:43.009 "assigned_rate_limits": { 00:15:43.009 "rw_ios_per_sec": 0, 00:15:43.009 "rw_mbytes_per_sec": 0, 00:15:43.009 "r_mbytes_per_sec": 0, 00:15:43.009 "w_mbytes_per_sec": 0 00:15:43.009 }, 00:15:43.009 "claimed": true, 00:15:43.009 "claim_type": "exclusive_write", 00:15:43.009 "zoned": false, 00:15:43.009 "supported_io_types": { 00:15:43.009 "read": true, 00:15:43.009 "write": true, 00:15:43.009 "unmap": true, 00:15:43.009 "flush": true, 00:15:43.009 "reset": true, 00:15:43.009 "nvme_admin": false, 00:15:43.009 "nvme_io": false, 00:15:43.009 "nvme_io_md": false, 00:15:43.009 "write_zeroes": true, 00:15:43.009 "zcopy": true, 00:15:43.009 "get_zone_info": false, 00:15:43.009 "zone_management": false, 00:15:43.009 "zone_append": false, 00:15:43.009 "compare": false, 00:15:43.009 "compare_and_write": false, 00:15:43.009 "abort": true, 00:15:43.009 "seek_hole": false, 00:15:43.009 "seek_data": false, 00:15:43.009 "copy": true, 00:15:43.009 "nvme_iov_md": false 00:15:43.009 }, 00:15:43.009 "memory_domains": [ 00:15:43.009 { 00:15:43.009 "dma_device_id": "system", 00:15:43.009 "dma_device_type": 1 00:15:43.009 }, 00:15:43.009 { 00:15:43.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.009 "dma_device_type": 2 00:15:43.009 } 00:15:43.009 ], 00:15:43.009 "driver_specific": {} 00:15:43.009 } 00:15:43.009 ] 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.009 "name": "Existed_Raid", 00:15:43.009 "uuid": "89d48ce1-e411-40ba-9a08-da4a32660efd", 00:15:43.009 "strip_size_kb": 0, 00:15:43.009 "state": "online", 00:15:43.009 
"raid_level": "raid1", 00:15:43.009 "superblock": false, 00:15:43.009 "num_base_bdevs": 4, 00:15:43.009 "num_base_bdevs_discovered": 4, 00:15:43.009 "num_base_bdevs_operational": 4, 00:15:43.009 "base_bdevs_list": [ 00:15:43.009 { 00:15:43.009 "name": "NewBaseBdev", 00:15:43.009 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:43.009 "is_configured": true, 00:15:43.009 "data_offset": 0, 00:15:43.009 "data_size": 65536 00:15:43.009 }, 00:15:43.009 { 00:15:43.009 "name": "BaseBdev2", 00:15:43.009 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:43.009 "is_configured": true, 00:15:43.009 "data_offset": 0, 00:15:43.009 "data_size": 65536 00:15:43.009 }, 00:15:43.009 { 00:15:43.009 "name": "BaseBdev3", 00:15:43.009 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:43.009 "is_configured": true, 00:15:43.009 "data_offset": 0, 00:15:43.009 "data_size": 65536 00:15:43.009 }, 00:15:43.009 { 00:15:43.009 "name": "BaseBdev4", 00:15:43.009 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:43.009 "is_configured": true, 00:15:43.009 "data_offset": 0, 00:15:43.009 "data_size": 65536 00:15:43.009 } 00:15:43.009 ] 00:15:43.009 }' 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.009 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.575 [2024-11-27 14:15:13.935404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.575 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.575 "name": "Existed_Raid", 00:15:43.575 "aliases": [ 00:15:43.575 "89d48ce1-e411-40ba-9a08-da4a32660efd" 00:15:43.575 ], 00:15:43.575 "product_name": "Raid Volume", 00:15:43.575 "block_size": 512, 00:15:43.575 "num_blocks": 65536, 00:15:43.575 "uuid": "89d48ce1-e411-40ba-9a08-da4a32660efd", 00:15:43.575 "assigned_rate_limits": { 00:15:43.575 "rw_ios_per_sec": 0, 00:15:43.575 "rw_mbytes_per_sec": 0, 00:15:43.575 "r_mbytes_per_sec": 0, 00:15:43.575 "w_mbytes_per_sec": 0 00:15:43.575 }, 00:15:43.575 "claimed": false, 00:15:43.575 "zoned": false, 00:15:43.575 "supported_io_types": { 00:15:43.575 "read": true, 00:15:43.575 "write": true, 00:15:43.575 "unmap": false, 00:15:43.575 "flush": false, 00:15:43.575 "reset": true, 00:15:43.575 "nvme_admin": false, 00:15:43.575 "nvme_io": false, 00:15:43.575 "nvme_io_md": false, 00:15:43.575 "write_zeroes": true, 00:15:43.575 "zcopy": false, 00:15:43.575 "get_zone_info": false, 00:15:43.575 "zone_management": false, 00:15:43.575 "zone_append": false, 00:15:43.575 "compare": false, 00:15:43.575 "compare_and_write": false, 00:15:43.575 "abort": false, 00:15:43.575 "seek_hole": false, 00:15:43.575 "seek_data": false, 00:15:43.575 
"copy": false, 00:15:43.575 "nvme_iov_md": false 00:15:43.575 }, 00:15:43.576 "memory_domains": [ 00:15:43.576 { 00:15:43.576 "dma_device_id": "system", 00:15:43.576 "dma_device_type": 1 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.576 "dma_device_type": 2 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "system", 00:15:43.576 "dma_device_type": 1 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.576 "dma_device_type": 2 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "system", 00:15:43.576 "dma_device_type": 1 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.576 "dma_device_type": 2 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "system", 00:15:43.576 "dma_device_type": 1 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.576 "dma_device_type": 2 00:15:43.576 } 00:15:43.576 ], 00:15:43.576 "driver_specific": { 00:15:43.576 "raid": { 00:15:43.576 "uuid": "89d48ce1-e411-40ba-9a08-da4a32660efd", 00:15:43.576 "strip_size_kb": 0, 00:15:43.576 "state": "online", 00:15:43.576 "raid_level": "raid1", 00:15:43.576 "superblock": false, 00:15:43.576 "num_base_bdevs": 4, 00:15:43.576 "num_base_bdevs_discovered": 4, 00:15:43.576 "num_base_bdevs_operational": 4, 00:15:43.576 "base_bdevs_list": [ 00:15:43.576 { 00:15:43.576 "name": "NewBaseBdev", 00:15:43.576 "uuid": "92cb2f5e-6332-4a21-8a98-7bb9df04e426", 00:15:43.576 "is_configured": true, 00:15:43.576 "data_offset": 0, 00:15:43.576 "data_size": 65536 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "name": "BaseBdev2", 00:15:43.576 "uuid": "7d47f0ba-81c1-4514-809e-6eee106cad99", 00:15:43.576 "is_configured": true, 00:15:43.576 "data_offset": 0, 00:15:43.576 "data_size": 65536 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "name": "BaseBdev3", 00:15:43.576 "uuid": "866aad7d-f3ef-4f9e-9bcd-3fadea80a09f", 00:15:43.576 
"is_configured": true, 00:15:43.576 "data_offset": 0, 00:15:43.576 "data_size": 65536 00:15:43.576 }, 00:15:43.576 { 00:15:43.576 "name": "BaseBdev4", 00:15:43.576 "uuid": "11067639-7b7d-4f86-9615-5f049e93da64", 00:15:43.576 "is_configured": true, 00:15:43.576 "data_offset": 0, 00:15:43.576 "data_size": 65536 00:15:43.576 } 00:15:43.576 ] 00:15:43.576 } 00:15:43.576 } 00:15:43.576 }' 00:15:43.576 14:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.576 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:43.576 BaseBdev2 00:15:43.576 BaseBdev3 00:15:43.576 BaseBdev4' 00:15:43.576 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.834 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.835 14:15:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.835 14:15:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.835 [2024-11-27 14:15:14.318996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.835 [2024-11-27 14:15:14.319340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.835 [2024-11-27 14:15:14.319614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.835 [2024-11-27 14:15:14.320095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.835 [2024-11-27 14:15:14.320119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73506 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73506 ']' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73506 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.835 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73506 00:15:44.093 killing process with pid 73506 00:15:44.093 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.093 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.093 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73506' 00:15:44.093 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73506 00:15:44.093 [2024-11-27 14:15:14.356107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.093 14:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73506 00:15:44.351 [2024-11-27 14:15:14.736858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.725 ************************************ 00:15:45.725 END TEST raid_state_function_test 00:15:45.725 ************************************ 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:45.725 00:15:45.725 real 0m12.843s 00:15:45.725 user 0m21.052s 00:15:45.725 sys 0m1.867s 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:45.725 14:15:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:45.725 14:15:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:45.725 14:15:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.725 14:15:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.725 ************************************ 00:15:45.725 START TEST raid_state_function_test_sb 00:15:45.725 ************************************ 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.725 
14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:45.725 Process raid pid: 74188 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74188 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74188' 00:15:45.725 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74188 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74188 ']' 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.726 14:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.726 [2024-11-27 14:15:16.101744] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:15:45.726 [2024-11-27 14:15:16.102247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.984 [2024-11-27 14:15:16.291148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.984 [2024-11-27 14:15:16.462139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.242 [2024-11-27 14:15:16.691609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.242 [2024-11-27 14:15:16.691675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.808 [2024-11-27 14:15:17.042040] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.808 [2024-11-27 14:15:17.042247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.808 [2024-11-27 14:15:17.042526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.808 [2024-11-27 14:15:17.042598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.808 [2024-11-27 14:15:17.042640] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:46.808 [2024-11-27 14:15:17.042777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.808 [2024-11-27 14:15:17.042857] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.808 [2024-11-27 14:15:17.043001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.808 14:15:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.808 "name": "Existed_Raid", 00:15:46.808 "uuid": "415a81be-7e89-4a36-9228-38032803216b", 00:15:46.808 "strip_size_kb": 0, 00:15:46.808 "state": "configuring", 00:15:46.808 "raid_level": "raid1", 00:15:46.808 "superblock": true, 00:15:46.808 "num_base_bdevs": 4, 00:15:46.808 "num_base_bdevs_discovered": 0, 00:15:46.808 "num_base_bdevs_operational": 4, 00:15:46.808 "base_bdevs_list": [ 00:15:46.808 { 00:15:46.808 "name": "BaseBdev1", 00:15:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.808 "is_configured": false, 00:15:46.808 "data_offset": 0, 00:15:46.808 "data_size": 0 00:15:46.808 }, 00:15:46.808 { 00:15:46.808 "name": "BaseBdev2", 00:15:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.808 "is_configured": false, 00:15:46.808 "data_offset": 0, 00:15:46.808 "data_size": 0 00:15:46.808 }, 00:15:46.808 { 00:15:46.808 "name": "BaseBdev3", 00:15:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.808 "is_configured": false, 00:15:46.808 "data_offset": 0, 00:15:46.808 "data_size": 0 00:15:46.808 }, 00:15:46.808 { 00:15:46.808 "name": "BaseBdev4", 00:15:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.808 "is_configured": false, 00:15:46.808 "data_offset": 0, 00:15:46.808 "data_size": 0 00:15:46.808 } 00:15:46.808 ] 00:15:46.808 }' 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.808 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.067 14:15:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.067 [2024-11-27 14:15:17.562200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.067 [2024-11-27 14:15:17.562619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.067 [2024-11-27 14:15:17.570161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.067 [2024-11-27 14:15:17.570397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.067 [2024-11-27 14:15:17.570532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.067 [2024-11-27 14:15:17.570579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.067 [2024-11-27 14:15:17.570591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.067 [2024-11-27 14:15:17.570607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.067 [2024-11-27 14:15:17.570616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:15:47.067 [2024-11-27 14:15:17.570631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.067 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.326 [2024-11-27 14:15:17.619725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.326 BaseBdev1 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.326 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.326 [ 00:15:47.326 { 00:15:47.326 "name": "BaseBdev1", 00:15:47.326 "aliases": [ 00:15:47.326 "103124ea-eca5-4458-ba6b-14605f8b5335" 00:15:47.326 ], 00:15:47.326 "product_name": "Malloc disk", 00:15:47.326 "block_size": 512, 00:15:47.326 "num_blocks": 65536, 00:15:47.326 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:47.327 "assigned_rate_limits": { 00:15:47.327 "rw_ios_per_sec": 0, 00:15:47.327 "rw_mbytes_per_sec": 0, 00:15:47.327 "r_mbytes_per_sec": 0, 00:15:47.327 "w_mbytes_per_sec": 0 00:15:47.327 }, 00:15:47.327 "claimed": true, 00:15:47.327 "claim_type": "exclusive_write", 00:15:47.327 "zoned": false, 00:15:47.327 "supported_io_types": { 00:15:47.327 "read": true, 00:15:47.327 "write": true, 00:15:47.327 "unmap": true, 00:15:47.327 "flush": true, 00:15:47.327 "reset": true, 00:15:47.327 "nvme_admin": false, 00:15:47.327 "nvme_io": false, 00:15:47.327 "nvme_io_md": false, 00:15:47.327 "write_zeroes": true, 00:15:47.327 "zcopy": true, 00:15:47.327 "get_zone_info": false, 00:15:47.327 "zone_management": false, 00:15:47.327 "zone_append": false, 00:15:47.327 "compare": false, 00:15:47.327 "compare_and_write": false, 00:15:47.327 "abort": true, 00:15:47.327 "seek_hole": false, 00:15:47.327 "seek_data": false, 00:15:47.327 "copy": true, 00:15:47.327 "nvme_iov_md": false 00:15:47.327 }, 00:15:47.327 "memory_domains": [ 00:15:47.327 { 00:15:47.327 "dma_device_id": "system", 00:15:47.327 "dma_device_type": 1 00:15:47.327 }, 00:15:47.327 { 00:15:47.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.327 "dma_device_type": 2 00:15:47.327 } 00:15:47.327 
], 00:15:47.327 "driver_specific": {} 00:15:47.327 } 00:15:47.327 ] 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.327 14:15:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.327 "name": "Existed_Raid", 00:15:47.327 "uuid": "93759447-5b0c-4d71-b800-d1146de02d62", 00:15:47.327 "strip_size_kb": 0, 00:15:47.327 "state": "configuring", 00:15:47.327 "raid_level": "raid1", 00:15:47.327 "superblock": true, 00:15:47.327 "num_base_bdevs": 4, 00:15:47.327 "num_base_bdevs_discovered": 1, 00:15:47.327 "num_base_bdevs_operational": 4, 00:15:47.327 "base_bdevs_list": [ 00:15:47.327 { 00:15:47.327 "name": "BaseBdev1", 00:15:47.327 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:47.327 "is_configured": true, 00:15:47.327 "data_offset": 2048, 00:15:47.327 "data_size": 63488 00:15:47.327 }, 00:15:47.327 { 00:15:47.327 "name": "BaseBdev2", 00:15:47.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.327 "is_configured": false, 00:15:47.327 "data_offset": 0, 00:15:47.327 "data_size": 0 00:15:47.327 }, 00:15:47.327 { 00:15:47.327 "name": "BaseBdev3", 00:15:47.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.327 "is_configured": false, 00:15:47.327 "data_offset": 0, 00:15:47.327 "data_size": 0 00:15:47.327 }, 00:15:47.327 { 00:15:47.327 "name": "BaseBdev4", 00:15:47.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.327 "is_configured": false, 00:15:47.327 "data_offset": 0, 00:15:47.327 "data_size": 0 00:15:47.327 } 00:15:47.327 ] 00:15:47.327 }' 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.327 14:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.894 14:15:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.894 [2024-11-27 14:15:18.108028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.894 [2024-11-27 14:15:18.108132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.894 [2024-11-27 14:15:18.116009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.894 [2024-11-27 14:15:18.119009] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.894 [2024-11-27 14:15:18.119224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.894 [2024-11-27 14:15:18.119355] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.894 [2024-11-27 14:15:18.119414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.894 [2024-11-27 14:15:18.119629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:47.894 [2024-11-27 14:15:18.119660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:47.894 "name": "Existed_Raid", 00:15:47.894 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:47.894 "strip_size_kb": 0, 00:15:47.894 "state": "configuring", 00:15:47.894 "raid_level": "raid1", 00:15:47.894 "superblock": true, 00:15:47.894 "num_base_bdevs": 4, 00:15:47.894 "num_base_bdevs_discovered": 1, 00:15:47.894 "num_base_bdevs_operational": 4, 00:15:47.894 "base_bdevs_list": [ 00:15:47.894 { 00:15:47.894 "name": "BaseBdev1", 00:15:47.894 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:47.894 "is_configured": true, 00:15:47.894 "data_offset": 2048, 00:15:47.894 "data_size": 63488 00:15:47.894 }, 00:15:47.894 { 00:15:47.894 "name": "BaseBdev2", 00:15:47.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.894 "is_configured": false, 00:15:47.894 "data_offset": 0, 00:15:47.894 "data_size": 0 00:15:47.894 }, 00:15:47.894 { 00:15:47.894 "name": "BaseBdev3", 00:15:47.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.894 "is_configured": false, 00:15:47.894 "data_offset": 0, 00:15:47.894 "data_size": 0 00:15:47.894 }, 00:15:47.894 { 00:15:47.894 "name": "BaseBdev4", 00:15:47.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.894 "is_configured": false, 00:15:47.894 "data_offset": 0, 00:15:47.894 "data_size": 0 00:15:47.894 } 00:15:47.894 ] 00:15:47.894 }' 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.894 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.153 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.153 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.153 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.153 [2024-11-27 14:15:18.663564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:48.413 BaseBdev2 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.413 [ 00:15:48.413 { 00:15:48.413 "name": "BaseBdev2", 00:15:48.413 "aliases": [ 00:15:48.413 "57e0dc81-544c-49ab-a0bc-b927bff57ac7" 00:15:48.413 ], 00:15:48.413 "product_name": "Malloc disk", 00:15:48.413 "block_size": 512, 00:15:48.413 "num_blocks": 65536, 00:15:48.413 "uuid": "57e0dc81-544c-49ab-a0bc-b927bff57ac7", 00:15:48.413 
"assigned_rate_limits": { 00:15:48.413 "rw_ios_per_sec": 0, 00:15:48.413 "rw_mbytes_per_sec": 0, 00:15:48.413 "r_mbytes_per_sec": 0, 00:15:48.413 "w_mbytes_per_sec": 0 00:15:48.413 }, 00:15:48.413 "claimed": true, 00:15:48.413 "claim_type": "exclusive_write", 00:15:48.413 "zoned": false, 00:15:48.413 "supported_io_types": { 00:15:48.413 "read": true, 00:15:48.413 "write": true, 00:15:48.413 "unmap": true, 00:15:48.413 "flush": true, 00:15:48.413 "reset": true, 00:15:48.413 "nvme_admin": false, 00:15:48.413 "nvme_io": false, 00:15:48.413 "nvme_io_md": false, 00:15:48.413 "write_zeroes": true, 00:15:48.413 "zcopy": true, 00:15:48.413 "get_zone_info": false, 00:15:48.413 "zone_management": false, 00:15:48.413 "zone_append": false, 00:15:48.413 "compare": false, 00:15:48.413 "compare_and_write": false, 00:15:48.413 "abort": true, 00:15:48.413 "seek_hole": false, 00:15:48.413 "seek_data": false, 00:15:48.413 "copy": true, 00:15:48.413 "nvme_iov_md": false 00:15:48.413 }, 00:15:48.413 "memory_domains": [ 00:15:48.413 { 00:15:48.413 "dma_device_id": "system", 00:15:48.413 "dma_device_type": 1 00:15:48.413 }, 00:15:48.413 { 00:15:48.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.413 "dma_device_type": 2 00:15:48.413 } 00:15:48.413 ], 00:15:48.413 "driver_specific": {} 00:15:48.413 } 00:15:48.413 ] 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.413 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.413 "name": "Existed_Raid", 00:15:48.413 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:48.413 "strip_size_kb": 0, 00:15:48.413 "state": "configuring", 00:15:48.413 "raid_level": "raid1", 00:15:48.413 "superblock": true, 00:15:48.413 "num_base_bdevs": 4, 00:15:48.413 "num_base_bdevs_discovered": 2, 00:15:48.413 "num_base_bdevs_operational": 4, 
00:15:48.413 "base_bdevs_list": [ 00:15:48.413 { 00:15:48.413 "name": "BaseBdev1", 00:15:48.413 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:48.413 "is_configured": true, 00:15:48.413 "data_offset": 2048, 00:15:48.413 "data_size": 63488 00:15:48.413 }, 00:15:48.413 { 00:15:48.413 "name": "BaseBdev2", 00:15:48.413 "uuid": "57e0dc81-544c-49ab-a0bc-b927bff57ac7", 00:15:48.413 "is_configured": true, 00:15:48.413 "data_offset": 2048, 00:15:48.413 "data_size": 63488 00:15:48.413 }, 00:15:48.413 { 00:15:48.413 "name": "BaseBdev3", 00:15:48.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.413 "is_configured": false, 00:15:48.413 "data_offset": 0, 00:15:48.413 "data_size": 0 00:15:48.413 }, 00:15:48.413 { 00:15:48.413 "name": "BaseBdev4", 00:15:48.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.413 "is_configured": false, 00:15:48.413 "data_offset": 0, 00:15:48.413 "data_size": 0 00:15:48.413 } 00:15:48.413 ] 00:15:48.413 }' 00:15:48.414 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.414 14:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.735 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.735 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.736 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.008 BaseBdev3 00:15:49.008 [2024-11-27 14:15:19.285336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.008 [ 00:15:49.008 { 00:15:49.008 "name": "BaseBdev3", 00:15:49.008 "aliases": [ 00:15:49.008 "7ab46ded-5be7-4626-9db7-a2c9a4c0018d" 00:15:49.008 ], 00:15:49.008 "product_name": "Malloc disk", 00:15:49.008 "block_size": 512, 00:15:49.008 "num_blocks": 65536, 00:15:49.008 "uuid": "7ab46ded-5be7-4626-9db7-a2c9a4c0018d", 00:15:49.008 "assigned_rate_limits": { 00:15:49.008 "rw_ios_per_sec": 0, 00:15:49.008 "rw_mbytes_per_sec": 0, 00:15:49.008 "r_mbytes_per_sec": 0, 00:15:49.008 "w_mbytes_per_sec": 0 00:15:49.008 }, 00:15:49.008 "claimed": true, 00:15:49.008 "claim_type": "exclusive_write", 00:15:49.008 "zoned": false, 00:15:49.008 "supported_io_types": { 00:15:49.008 "read": true, 00:15:49.008 
"write": true, 00:15:49.008 "unmap": true, 00:15:49.008 "flush": true, 00:15:49.008 "reset": true, 00:15:49.008 "nvme_admin": false, 00:15:49.008 "nvme_io": false, 00:15:49.008 "nvme_io_md": false, 00:15:49.008 "write_zeroes": true, 00:15:49.008 "zcopy": true, 00:15:49.008 "get_zone_info": false, 00:15:49.008 "zone_management": false, 00:15:49.008 "zone_append": false, 00:15:49.008 "compare": false, 00:15:49.008 "compare_and_write": false, 00:15:49.008 "abort": true, 00:15:49.008 "seek_hole": false, 00:15:49.008 "seek_data": false, 00:15:49.008 "copy": true, 00:15:49.008 "nvme_iov_md": false 00:15:49.008 }, 00:15:49.008 "memory_domains": [ 00:15:49.008 { 00:15:49.008 "dma_device_id": "system", 00:15:49.008 "dma_device_type": 1 00:15:49.008 }, 00:15:49.008 { 00:15:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.008 "dma_device_type": 2 00:15:49.008 } 00:15:49.008 ], 00:15:49.008 "driver_specific": {} 00:15:49.008 } 00:15:49.008 ] 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.008 "name": "Existed_Raid", 00:15:49.008 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:49.008 "strip_size_kb": 0, 00:15:49.008 "state": "configuring", 00:15:49.008 "raid_level": "raid1", 00:15:49.008 "superblock": true, 00:15:49.008 "num_base_bdevs": 4, 00:15:49.008 "num_base_bdevs_discovered": 3, 00:15:49.008 "num_base_bdevs_operational": 4, 00:15:49.008 "base_bdevs_list": [ 00:15:49.008 { 00:15:49.008 "name": "BaseBdev1", 00:15:49.008 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:49.008 "is_configured": true, 00:15:49.008 "data_offset": 2048, 00:15:49.008 "data_size": 63488 00:15:49.008 }, 00:15:49.008 { 00:15:49.008 "name": "BaseBdev2", 00:15:49.008 "uuid": 
"57e0dc81-544c-49ab-a0bc-b927bff57ac7", 00:15:49.008 "is_configured": true, 00:15:49.008 "data_offset": 2048, 00:15:49.008 "data_size": 63488 00:15:49.008 }, 00:15:49.008 { 00:15:49.008 "name": "BaseBdev3", 00:15:49.008 "uuid": "7ab46ded-5be7-4626-9db7-a2c9a4c0018d", 00:15:49.008 "is_configured": true, 00:15:49.008 "data_offset": 2048, 00:15:49.008 "data_size": 63488 00:15:49.008 }, 00:15:49.008 { 00:15:49.008 "name": "BaseBdev4", 00:15:49.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.008 "is_configured": false, 00:15:49.008 "data_offset": 0, 00:15:49.008 "data_size": 0 00:15:49.008 } 00:15:49.008 ] 00:15:49.008 }' 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.008 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.576 [2024-11-27 14:15:19.880229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.576 [2024-11-27 14:15:19.880949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:49.576 [2024-11-27 14:15:19.881117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.576 BaseBdev4 00:15:49.576 [2024-11-27 14:15:19.881558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.576 [2024-11-27 14:15:19.881769] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:49.576 [2024-11-27 14:15:19.881791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:49.576 [2024-11-27 14:15:19.882013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.576 [ 00:15:49.576 { 00:15:49.576 "name": "BaseBdev4", 00:15:49.576 "aliases": [ 00:15:49.576 "eadb9228-724c-40fd-bb12-b1b027826934" 00:15:49.576 ], 00:15:49.576 "product_name": "Malloc disk", 00:15:49.576 "block_size": 512, 00:15:49.576 
"num_blocks": 65536, 00:15:49.576 "uuid": "eadb9228-724c-40fd-bb12-b1b027826934", 00:15:49.576 "assigned_rate_limits": { 00:15:49.576 "rw_ios_per_sec": 0, 00:15:49.576 "rw_mbytes_per_sec": 0, 00:15:49.576 "r_mbytes_per_sec": 0, 00:15:49.576 "w_mbytes_per_sec": 0 00:15:49.576 }, 00:15:49.576 "claimed": true, 00:15:49.576 "claim_type": "exclusive_write", 00:15:49.576 "zoned": false, 00:15:49.576 "supported_io_types": { 00:15:49.576 "read": true, 00:15:49.576 "write": true, 00:15:49.576 "unmap": true, 00:15:49.576 "flush": true, 00:15:49.576 "reset": true, 00:15:49.576 "nvme_admin": false, 00:15:49.576 "nvme_io": false, 00:15:49.576 "nvme_io_md": false, 00:15:49.576 "write_zeroes": true, 00:15:49.576 "zcopy": true, 00:15:49.576 "get_zone_info": false, 00:15:49.576 "zone_management": false, 00:15:49.576 "zone_append": false, 00:15:49.576 "compare": false, 00:15:49.576 "compare_and_write": false, 00:15:49.576 "abort": true, 00:15:49.576 "seek_hole": false, 00:15:49.576 "seek_data": false, 00:15:49.576 "copy": true, 00:15:49.576 "nvme_iov_md": false 00:15:49.576 }, 00:15:49.576 "memory_domains": [ 00:15:49.576 { 00:15:49.576 "dma_device_id": "system", 00:15:49.576 "dma_device_type": 1 00:15:49.576 }, 00:15:49.576 { 00:15:49.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.576 "dma_device_type": 2 00:15:49.576 } 00:15:49.576 ], 00:15:49.576 "driver_specific": {} 00:15:49.576 } 00:15:49.576 ] 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.576 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.577 "name": "Existed_Raid", 00:15:49.577 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:49.577 "strip_size_kb": 0, 00:15:49.577 "state": "online", 00:15:49.577 "raid_level": "raid1", 00:15:49.577 "superblock": true, 00:15:49.577 "num_base_bdevs": 4, 
00:15:49.577 "num_base_bdevs_discovered": 4, 00:15:49.577 "num_base_bdevs_operational": 4, 00:15:49.577 "base_bdevs_list": [ 00:15:49.577 { 00:15:49.577 "name": "BaseBdev1", 00:15:49.577 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:49.577 "is_configured": true, 00:15:49.577 "data_offset": 2048, 00:15:49.577 "data_size": 63488 00:15:49.577 }, 00:15:49.577 { 00:15:49.577 "name": "BaseBdev2", 00:15:49.577 "uuid": "57e0dc81-544c-49ab-a0bc-b927bff57ac7", 00:15:49.577 "is_configured": true, 00:15:49.577 "data_offset": 2048, 00:15:49.577 "data_size": 63488 00:15:49.577 }, 00:15:49.577 { 00:15:49.577 "name": "BaseBdev3", 00:15:49.577 "uuid": "7ab46ded-5be7-4626-9db7-a2c9a4c0018d", 00:15:49.577 "is_configured": true, 00:15:49.577 "data_offset": 2048, 00:15:49.577 "data_size": 63488 00:15:49.577 }, 00:15:49.577 { 00:15:49.577 "name": "BaseBdev4", 00:15:49.577 "uuid": "eadb9228-724c-40fd-bb12-b1b027826934", 00:15:49.577 "is_configured": true, 00:15:49.577 "data_offset": 2048, 00:15:49.577 "data_size": 63488 00:15:49.577 } 00:15:49.577 ] 00:15:49.577 }' 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.577 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.144 
14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 [2024-11-27 14:15:20.428989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.144 "name": "Existed_Raid", 00:15:50.144 "aliases": [ 00:15:50.144 "799bd5fd-5918-46bb-a820-b44db65f4b02" 00:15:50.144 ], 00:15:50.144 "product_name": "Raid Volume", 00:15:50.144 "block_size": 512, 00:15:50.144 "num_blocks": 63488, 00:15:50.144 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:50.144 "assigned_rate_limits": { 00:15:50.144 "rw_ios_per_sec": 0, 00:15:50.144 "rw_mbytes_per_sec": 0, 00:15:50.144 "r_mbytes_per_sec": 0, 00:15:50.144 "w_mbytes_per_sec": 0 00:15:50.144 }, 00:15:50.144 "claimed": false, 00:15:50.144 "zoned": false, 00:15:50.144 "supported_io_types": { 00:15:50.144 "read": true, 00:15:50.144 "write": true, 00:15:50.144 "unmap": false, 00:15:50.144 "flush": false, 00:15:50.144 "reset": true, 00:15:50.144 "nvme_admin": false, 00:15:50.144 "nvme_io": false, 00:15:50.144 "nvme_io_md": false, 00:15:50.144 "write_zeroes": true, 00:15:50.144 "zcopy": false, 00:15:50.144 "get_zone_info": false, 00:15:50.144 "zone_management": false, 00:15:50.144 "zone_append": false, 00:15:50.144 "compare": false, 00:15:50.144 "compare_and_write": false, 00:15:50.144 "abort": false, 00:15:50.144 "seek_hole": false, 00:15:50.144 "seek_data": false, 00:15:50.144 "copy": false, 00:15:50.144 
"nvme_iov_md": false 00:15:50.144 }, 00:15:50.144 "memory_domains": [ 00:15:50.144 { 00:15:50.144 "dma_device_id": "system", 00:15:50.144 "dma_device_type": 1 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.144 "dma_device_type": 2 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "system", 00:15:50.144 "dma_device_type": 1 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.144 "dma_device_type": 2 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "system", 00:15:50.144 "dma_device_type": 1 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.144 "dma_device_type": 2 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "system", 00:15:50.144 "dma_device_type": 1 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.144 "dma_device_type": 2 00:15:50.144 } 00:15:50.144 ], 00:15:50.144 "driver_specific": { 00:15:50.144 "raid": { 00:15:50.144 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:50.144 "strip_size_kb": 0, 00:15:50.144 "state": "online", 00:15:50.144 "raid_level": "raid1", 00:15:50.144 "superblock": true, 00:15:50.144 "num_base_bdevs": 4, 00:15:50.144 "num_base_bdevs_discovered": 4, 00:15:50.144 "num_base_bdevs_operational": 4, 00:15:50.144 "base_bdevs_list": [ 00:15:50.144 { 00:15:50.144 "name": "BaseBdev1", 00:15:50.144 "uuid": "103124ea-eca5-4458-ba6b-14605f8b5335", 00:15:50.144 "is_configured": true, 00:15:50.144 "data_offset": 2048, 00:15:50.144 "data_size": 63488 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "name": "BaseBdev2", 00:15:50.144 "uuid": "57e0dc81-544c-49ab-a0bc-b927bff57ac7", 00:15:50.144 "is_configured": true, 00:15:50.144 "data_offset": 2048, 00:15:50.144 "data_size": 63488 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "name": "BaseBdev3", 00:15:50.144 "uuid": "7ab46ded-5be7-4626-9db7-a2c9a4c0018d", 00:15:50.144 "is_configured": true, 
00:15:50.144 "data_offset": 2048, 00:15:50.144 "data_size": 63488 00:15:50.144 }, 00:15:50.144 { 00:15:50.144 "name": "BaseBdev4", 00:15:50.144 "uuid": "eadb9228-724c-40fd-bb12-b1b027826934", 00:15:50.144 "is_configured": true, 00:15:50.144 "data_offset": 2048, 00:15:50.144 "data_size": 63488 00:15:50.144 } 00:15:50.144 ] 00:15:50.144 } 00:15:50.144 } 00:15:50.144 }' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:50.144 BaseBdev2 00:15:50.144 BaseBdev3 00:15:50.144 BaseBdev4' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.144 14:15:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.404 [2024-11-27 14:15:20.764730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:50.404 14:15:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.404 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.404 "name": "Existed_Raid", 00:15:50.404 "uuid": "799bd5fd-5918-46bb-a820-b44db65f4b02", 00:15:50.404 "strip_size_kb": 0, 00:15:50.404 
"state": "online", 00:15:50.404 "raid_level": "raid1", 00:15:50.404 "superblock": true, 00:15:50.404 "num_base_bdevs": 4, 00:15:50.404 "num_base_bdevs_discovered": 3, 00:15:50.404 "num_base_bdevs_operational": 3, 00:15:50.404 "base_bdevs_list": [ 00:15:50.404 { 00:15:50.404 "name": null, 00:15:50.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.404 "is_configured": false, 00:15:50.404 "data_offset": 0, 00:15:50.404 "data_size": 63488 00:15:50.404 }, 00:15:50.404 { 00:15:50.404 "name": "BaseBdev2", 00:15:50.404 "uuid": "57e0dc81-544c-49ab-a0bc-b927bff57ac7", 00:15:50.404 "is_configured": true, 00:15:50.404 "data_offset": 2048, 00:15:50.404 "data_size": 63488 00:15:50.404 }, 00:15:50.405 { 00:15:50.405 "name": "BaseBdev3", 00:15:50.405 "uuid": "7ab46ded-5be7-4626-9db7-a2c9a4c0018d", 00:15:50.405 "is_configured": true, 00:15:50.405 "data_offset": 2048, 00:15:50.405 "data_size": 63488 00:15:50.405 }, 00:15:50.405 { 00:15:50.405 "name": "BaseBdev4", 00:15:50.405 "uuid": "eadb9228-724c-40fd-bb12-b1b027826934", 00:15:50.405 "is_configured": true, 00:15:50.405 "data_offset": 2048, 00:15:50.405 "data_size": 63488 00:15:50.405 } 00:15:50.405 ] 00:15:50.405 }' 00:15:50.405 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.405 14:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.973 14:15:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.973 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.973 [2024-11-27 14:15:21.424238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.232 [2024-11-27 14:15:21.579083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.232 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.232 [2024-11-27 14:15:21.735739] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:51.232 [2024-11-27 14:15:21.736184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.491 [2024-11-27 14:15:21.828172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.491 [2024-11-27 14:15:21.828258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.491 [2024-11-27 14:15:21.828280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.491 BaseBdev2 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.491 14:15:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:51.491 [ 00:15:51.492 { 00:15:51.492 "name": "BaseBdev2", 00:15:51.492 "aliases": [ 00:15:51.492 "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb" 00:15:51.492 ], 00:15:51.492 "product_name": "Malloc disk", 00:15:51.492 "block_size": 512, 00:15:51.492 "num_blocks": 65536, 00:15:51.492 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:51.492 "assigned_rate_limits": { 00:15:51.492 "rw_ios_per_sec": 0, 00:15:51.492 "rw_mbytes_per_sec": 0, 00:15:51.492 "r_mbytes_per_sec": 0, 00:15:51.492 "w_mbytes_per_sec": 0 00:15:51.492 }, 00:15:51.492 "claimed": false, 00:15:51.492 "zoned": false, 00:15:51.492 "supported_io_types": { 00:15:51.492 "read": true, 00:15:51.492 "write": true, 00:15:51.492 "unmap": true, 00:15:51.492 "flush": true, 00:15:51.492 "reset": true, 00:15:51.492 "nvme_admin": false, 00:15:51.492 "nvme_io": false, 00:15:51.492 "nvme_io_md": false, 00:15:51.492 "write_zeroes": true, 00:15:51.492 "zcopy": true, 00:15:51.492 "get_zone_info": false, 00:15:51.492 "zone_management": false, 00:15:51.492 "zone_append": false, 00:15:51.492 "compare": false, 00:15:51.492 "compare_and_write": false, 00:15:51.492 "abort": true, 00:15:51.492 "seek_hole": false, 00:15:51.492 "seek_data": false, 00:15:51.492 "copy": true, 00:15:51.492 "nvme_iov_md": false 00:15:51.492 }, 00:15:51.492 "memory_domains": [ 00:15:51.492 { 00:15:51.492 "dma_device_id": "system", 00:15:51.492 "dma_device_type": 1 00:15:51.492 }, 00:15:51.492 { 00:15:51.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.492 "dma_device_type": 2 00:15:51.492 } 00:15:51.492 ], 00:15:51.492 "driver_specific": {} 00:15:51.492 } 00:15:51.492 ] 00:15:51.492 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.492 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.492 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:51.492 14:15:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:51.492 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.492 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.492 14:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 BaseBdev3 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 [ 00:15:51.751 { 00:15:51.751 "name": "BaseBdev3", 00:15:51.751 "aliases": [ 00:15:51.751 "2b17b8b5-1279-472e-b32d-36247f6c9142" 00:15:51.751 ], 00:15:51.751 "product_name": "Malloc disk", 00:15:51.751 "block_size": 512, 00:15:51.751 "num_blocks": 65536, 00:15:51.751 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:51.751 "assigned_rate_limits": { 00:15:51.751 "rw_ios_per_sec": 0, 00:15:51.751 "rw_mbytes_per_sec": 0, 00:15:51.751 "r_mbytes_per_sec": 0, 00:15:51.751 "w_mbytes_per_sec": 0 00:15:51.751 }, 00:15:51.751 "claimed": false, 00:15:51.751 "zoned": false, 00:15:51.751 "supported_io_types": { 00:15:51.751 "read": true, 00:15:51.751 "write": true, 00:15:51.751 "unmap": true, 00:15:51.751 "flush": true, 00:15:51.751 "reset": true, 00:15:51.751 "nvme_admin": false, 00:15:51.751 "nvme_io": false, 00:15:51.751 "nvme_io_md": false, 00:15:51.751 "write_zeroes": true, 00:15:51.751 "zcopy": true, 00:15:51.751 "get_zone_info": false, 00:15:51.751 "zone_management": false, 00:15:51.751 "zone_append": false, 00:15:51.751 "compare": false, 00:15:51.751 "compare_and_write": false, 00:15:51.751 "abort": true, 00:15:51.751 "seek_hole": false, 00:15:51.751 "seek_data": false, 00:15:51.751 "copy": true, 00:15:51.751 "nvme_iov_md": false 00:15:51.751 }, 00:15:51.751 "memory_domains": [ 00:15:51.751 { 00:15:51.751 "dma_device_id": "system", 00:15:51.751 "dma_device_type": 1 00:15:51.751 }, 00:15:51.751 { 00:15:51.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.751 "dma_device_type": 2 00:15:51.751 } 00:15:51.751 ], 00:15:51.751 "driver_specific": {} 00:15:51.751 } 00:15:51.751 ] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 BaseBdev4 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 [ 00:15:51.751 { 00:15:51.751 "name": "BaseBdev4", 00:15:51.751 "aliases": [ 00:15:51.751 "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad" 00:15:51.751 ], 00:15:51.751 "product_name": "Malloc disk", 00:15:51.751 "block_size": 512, 00:15:51.751 "num_blocks": 65536, 00:15:51.751 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:51.751 "assigned_rate_limits": { 00:15:51.751 "rw_ios_per_sec": 0, 00:15:51.751 "rw_mbytes_per_sec": 0, 00:15:51.751 "r_mbytes_per_sec": 0, 00:15:51.751 "w_mbytes_per_sec": 0 00:15:51.751 }, 00:15:51.751 "claimed": false, 00:15:51.751 "zoned": false, 00:15:51.751 "supported_io_types": { 00:15:51.751 "read": true, 00:15:51.751 "write": true, 00:15:51.751 "unmap": true, 00:15:51.751 "flush": true, 00:15:51.751 "reset": true, 00:15:51.751 "nvme_admin": false, 00:15:51.751 "nvme_io": false, 00:15:51.751 "nvme_io_md": false, 00:15:51.751 "write_zeroes": true, 00:15:51.751 "zcopy": true, 00:15:51.751 "get_zone_info": false, 00:15:51.751 "zone_management": false, 00:15:51.751 "zone_append": false, 00:15:51.751 "compare": false, 00:15:51.751 "compare_and_write": false, 00:15:51.751 "abort": true, 00:15:51.751 "seek_hole": false, 00:15:51.751 "seek_data": false, 00:15:51.751 "copy": true, 00:15:51.751 "nvme_iov_md": false 00:15:51.751 }, 00:15:51.751 "memory_domains": [ 00:15:51.751 { 00:15:51.751 "dma_device_id": "system", 00:15:51.751 "dma_device_type": 1 00:15:51.751 }, 00:15:51.751 { 00:15:51.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.751 "dma_device_type": 2 00:15:51.751 } 00:15:51.751 ], 00:15:51.751 "driver_specific": {} 00:15:51.751 } 00:15:51.751 ] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 [2024-11-27 14:15:22.116342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.751 [2024-11-27 14:15:22.116692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.751 [2024-11-27 14:15:22.116750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.751 [2024-11-27 14:15:22.119457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.751 [2024-11-27 14:15:22.119519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.752 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.752 "name": "Existed_Raid", 00:15:51.752 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:51.752 "strip_size_kb": 0, 00:15:51.752 "state": "configuring", 00:15:51.752 "raid_level": "raid1", 00:15:51.752 "superblock": true, 00:15:51.752 "num_base_bdevs": 4, 00:15:51.752 "num_base_bdevs_discovered": 3, 00:15:51.752 "num_base_bdevs_operational": 4, 00:15:51.752 "base_bdevs_list": [ 00:15:51.752 { 00:15:51.752 "name": "BaseBdev1", 00:15:51.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.752 "is_configured": false, 00:15:51.752 "data_offset": 0, 00:15:51.752 "data_size": 0 00:15:51.752 }, 00:15:51.752 { 00:15:51.752 "name": "BaseBdev2", 00:15:51.752 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 
00:15:51.752 "is_configured": true, 00:15:51.752 "data_offset": 2048, 00:15:51.752 "data_size": 63488 00:15:51.752 }, 00:15:51.752 { 00:15:51.752 "name": "BaseBdev3", 00:15:51.752 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:51.752 "is_configured": true, 00:15:51.752 "data_offset": 2048, 00:15:51.752 "data_size": 63488 00:15:51.752 }, 00:15:51.752 { 00:15:51.752 "name": "BaseBdev4", 00:15:51.752 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:51.752 "is_configured": true, 00:15:51.752 "data_offset": 2048, 00:15:51.752 "data_size": 63488 00:15:51.752 } 00:15:51.752 ] 00:15:51.752 }' 00:15:51.752 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.752 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.319 [2024-11-27 14:15:22.668607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.319 "name": "Existed_Raid", 00:15:52.319 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:52.319 "strip_size_kb": 0, 00:15:52.319 "state": "configuring", 00:15:52.319 "raid_level": "raid1", 00:15:52.319 "superblock": true, 00:15:52.319 "num_base_bdevs": 4, 00:15:52.319 "num_base_bdevs_discovered": 2, 00:15:52.319 "num_base_bdevs_operational": 4, 00:15:52.319 "base_bdevs_list": [ 00:15:52.319 { 00:15:52.319 "name": "BaseBdev1", 00:15:52.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.319 "is_configured": false, 00:15:52.319 "data_offset": 0, 00:15:52.319 "data_size": 0 00:15:52.319 }, 00:15:52.319 { 00:15:52.319 "name": null, 00:15:52.319 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:52.319 
"is_configured": false, 00:15:52.319 "data_offset": 0, 00:15:52.319 "data_size": 63488 00:15:52.319 }, 00:15:52.319 { 00:15:52.319 "name": "BaseBdev3", 00:15:52.319 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:52.319 "is_configured": true, 00:15:52.319 "data_offset": 2048, 00:15:52.319 "data_size": 63488 00:15:52.319 }, 00:15:52.319 { 00:15:52.319 "name": "BaseBdev4", 00:15:52.319 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:52.319 "is_configured": true, 00:15:52.319 "data_offset": 2048, 00:15:52.319 "data_size": 63488 00:15:52.319 } 00:15:52.319 ] 00:15:52.319 }' 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.319 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.887 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 [2024-11-27 14:15:23.295420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.888 BaseBdev1 
00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 [ 00:15:52.888 { 00:15:52.888 "name": "BaseBdev1", 00:15:52.888 "aliases": [ 00:15:52.888 "4144d98c-f516-4889-a1df-6de4757304ff" 00:15:52.888 ], 00:15:52.888 "product_name": "Malloc disk", 00:15:52.888 "block_size": 512, 00:15:52.888 "num_blocks": 65536, 00:15:52.888 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:52.888 "assigned_rate_limits": { 00:15:52.888 
"rw_ios_per_sec": 0, 00:15:52.888 "rw_mbytes_per_sec": 0, 00:15:52.888 "r_mbytes_per_sec": 0, 00:15:52.888 "w_mbytes_per_sec": 0 00:15:52.888 }, 00:15:52.888 "claimed": true, 00:15:52.888 "claim_type": "exclusive_write", 00:15:52.888 "zoned": false, 00:15:52.888 "supported_io_types": { 00:15:52.888 "read": true, 00:15:52.888 "write": true, 00:15:52.888 "unmap": true, 00:15:52.888 "flush": true, 00:15:52.888 "reset": true, 00:15:52.888 "nvme_admin": false, 00:15:52.888 "nvme_io": false, 00:15:52.888 "nvme_io_md": false, 00:15:52.888 "write_zeroes": true, 00:15:52.888 "zcopy": true, 00:15:52.888 "get_zone_info": false, 00:15:52.888 "zone_management": false, 00:15:52.888 "zone_append": false, 00:15:52.888 "compare": false, 00:15:52.888 "compare_and_write": false, 00:15:52.888 "abort": true, 00:15:52.888 "seek_hole": false, 00:15:52.888 "seek_data": false, 00:15:52.888 "copy": true, 00:15:52.888 "nvme_iov_md": false 00:15:52.888 }, 00:15:52.888 "memory_domains": [ 00:15:52.888 { 00:15:52.888 "dma_device_id": "system", 00:15:52.888 "dma_device_type": 1 00:15:52.888 }, 00:15:52.888 { 00:15:52.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.888 "dma_device_type": 2 00:15:52.888 } 00:15:52.888 ], 00:15:52.888 "driver_specific": {} 00:15:52.888 } 00:15:52.888 ] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.888 "name": "Existed_Raid", 00:15:52.888 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:52.888 "strip_size_kb": 0, 00:15:52.888 "state": "configuring", 00:15:52.888 "raid_level": "raid1", 00:15:52.888 "superblock": true, 00:15:52.888 "num_base_bdevs": 4, 00:15:52.888 "num_base_bdevs_discovered": 3, 00:15:52.888 "num_base_bdevs_operational": 4, 00:15:52.888 "base_bdevs_list": [ 00:15:52.888 { 00:15:52.888 "name": "BaseBdev1", 00:15:52.888 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:52.888 "is_configured": true, 00:15:52.888 "data_offset": 2048, 00:15:52.888 "data_size": 63488 
00:15:52.888 }, 00:15:52.888 { 00:15:52.888 "name": null, 00:15:52.888 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:52.888 "is_configured": false, 00:15:52.888 "data_offset": 0, 00:15:52.888 "data_size": 63488 00:15:52.888 }, 00:15:52.888 { 00:15:52.888 "name": "BaseBdev3", 00:15:52.888 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:52.888 "is_configured": true, 00:15:52.888 "data_offset": 2048, 00:15:52.888 "data_size": 63488 00:15:52.888 }, 00:15:52.888 { 00:15:52.888 "name": "BaseBdev4", 00:15:52.888 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:52.888 "is_configured": true, 00:15:52.888 "data_offset": 2048, 00:15:52.888 "data_size": 63488 00:15:52.888 } 00:15:52.888 ] 00:15:52.888 }' 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.888 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.456 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:53.456 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.456 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.457 
[2024-11-27 14:15:23.903765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.457 14:15:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.457 "name": "Existed_Raid", 00:15:53.457 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:53.457 "strip_size_kb": 0, 00:15:53.457 "state": "configuring", 00:15:53.457 "raid_level": "raid1", 00:15:53.457 "superblock": true, 00:15:53.457 "num_base_bdevs": 4, 00:15:53.457 "num_base_bdevs_discovered": 2, 00:15:53.457 "num_base_bdevs_operational": 4, 00:15:53.457 "base_bdevs_list": [ 00:15:53.457 { 00:15:53.457 "name": "BaseBdev1", 00:15:53.457 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:53.457 "is_configured": true, 00:15:53.457 "data_offset": 2048, 00:15:53.457 "data_size": 63488 00:15:53.457 }, 00:15:53.457 { 00:15:53.457 "name": null, 00:15:53.457 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:53.457 "is_configured": false, 00:15:53.457 "data_offset": 0, 00:15:53.457 "data_size": 63488 00:15:53.457 }, 00:15:53.457 { 00:15:53.457 "name": null, 00:15:53.457 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:53.457 "is_configured": false, 00:15:53.457 "data_offset": 0, 00:15:53.457 "data_size": 63488 00:15:53.457 }, 00:15:53.457 { 00:15:53.457 "name": "BaseBdev4", 00:15:53.457 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:53.457 "is_configured": true, 00:15:53.457 "data_offset": 2048, 00:15:53.457 "data_size": 63488 00:15:53.457 } 00:15:53.457 ] 00:15:53.457 }' 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.457 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.025 
14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.025 [2024-11-27 14:15:24.515931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.025 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.283 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.283 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.283 "name": "Existed_Raid", 00:15:54.283 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:54.283 "strip_size_kb": 0, 00:15:54.283 "state": "configuring", 00:15:54.283 "raid_level": "raid1", 00:15:54.283 "superblock": true, 00:15:54.283 "num_base_bdevs": 4, 00:15:54.283 "num_base_bdevs_discovered": 3, 00:15:54.283 "num_base_bdevs_operational": 4, 00:15:54.283 "base_bdevs_list": [ 00:15:54.283 { 00:15:54.283 "name": "BaseBdev1", 00:15:54.283 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:54.283 "is_configured": true, 00:15:54.283 "data_offset": 2048, 00:15:54.283 "data_size": 63488 00:15:54.283 }, 00:15:54.283 { 00:15:54.283 "name": null, 00:15:54.283 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:54.283 "is_configured": false, 00:15:54.283 "data_offset": 0, 00:15:54.283 "data_size": 63488 00:15:54.283 }, 00:15:54.283 { 00:15:54.283 "name": "BaseBdev3", 00:15:54.283 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:54.283 "is_configured": true, 00:15:54.283 "data_offset": 2048, 00:15:54.283 "data_size": 63488 00:15:54.283 }, 00:15:54.283 { 00:15:54.283 "name": "BaseBdev4", 00:15:54.283 "uuid": 
"6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:54.283 "is_configured": true, 00:15:54.283 "data_offset": 2048, 00:15:54.283 "data_size": 63488 00:15:54.283 } 00:15:54.283 ] 00:15:54.283 }' 00:15:54.283 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.283 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.855 [2024-11-27 14:15:25.136301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.855 "name": "Existed_Raid", 00:15:54.855 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:54.855 "strip_size_kb": 0, 00:15:54.855 "state": "configuring", 00:15:54.855 "raid_level": "raid1", 00:15:54.855 "superblock": true, 00:15:54.855 "num_base_bdevs": 4, 00:15:54.855 "num_base_bdevs_discovered": 2, 00:15:54.855 "num_base_bdevs_operational": 4, 00:15:54.855 "base_bdevs_list": [ 00:15:54.855 { 00:15:54.855 "name": null, 00:15:54.855 
"uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:54.855 "is_configured": false, 00:15:54.855 "data_offset": 0, 00:15:54.855 "data_size": 63488 00:15:54.855 }, 00:15:54.855 { 00:15:54.855 "name": null, 00:15:54.855 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:54.855 "is_configured": false, 00:15:54.855 "data_offset": 0, 00:15:54.855 "data_size": 63488 00:15:54.855 }, 00:15:54.855 { 00:15:54.855 "name": "BaseBdev3", 00:15:54.855 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:54.855 "is_configured": true, 00:15:54.855 "data_offset": 2048, 00:15:54.855 "data_size": 63488 00:15:54.855 }, 00:15:54.855 { 00:15:54.855 "name": "BaseBdev4", 00:15:54.855 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:54.855 "is_configured": true, 00:15:54.855 "data_offset": 2048, 00:15:54.855 "data_size": 63488 00:15:54.855 } 00:15:54.855 ] 00:15:54.855 }' 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.855 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.422 [2024-11-27 14:15:25.835926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.422 14:15:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.422 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.422 "name": "Existed_Raid", 00:15:55.422 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:55.422 "strip_size_kb": 0, 00:15:55.422 "state": "configuring", 00:15:55.422 "raid_level": "raid1", 00:15:55.422 "superblock": true, 00:15:55.422 "num_base_bdevs": 4, 00:15:55.422 "num_base_bdevs_discovered": 3, 00:15:55.423 "num_base_bdevs_operational": 4, 00:15:55.423 "base_bdevs_list": [ 00:15:55.423 { 00:15:55.423 "name": null, 00:15:55.423 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:55.423 "is_configured": false, 00:15:55.423 "data_offset": 0, 00:15:55.423 "data_size": 63488 00:15:55.423 }, 00:15:55.423 { 00:15:55.423 "name": "BaseBdev2", 00:15:55.423 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:55.423 "is_configured": true, 00:15:55.423 "data_offset": 2048, 00:15:55.423 "data_size": 63488 00:15:55.423 }, 00:15:55.423 { 00:15:55.423 "name": "BaseBdev3", 00:15:55.423 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:55.423 "is_configured": true, 00:15:55.423 "data_offset": 2048, 00:15:55.423 "data_size": 63488 00:15:55.423 }, 00:15:55.423 { 00:15:55.423 "name": "BaseBdev4", 00:15:55.423 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:55.423 "is_configured": true, 00:15:55.423 "data_offset": 2048, 00:15:55.423 "data_size": 63488 00:15:55.423 } 00:15:55.423 ] 00:15:55.423 }' 00:15:55.423 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.423 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.991 14:15:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4144d98c-f516-4889-a1df-6de4757304ff 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.991 [2024-11-27 14:15:26.490690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:55.991 [2024-11-27 14:15:26.491344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:55.991 [2024-11-27 14:15:26.491378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.991 NewBaseBdev 00:15:55.991 [2024-11-27 14:15:26.491800] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:55.991 [2024-11-27 14:15:26.492084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:55.991 [2024-11-27 14:15:26.492108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:55.991 [2024-11-27 14:15:26.492294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.991 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.249 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.249 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:56.249 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.249 
14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.249 [ 00:15:56.249 { 00:15:56.249 "name": "NewBaseBdev", 00:15:56.249 "aliases": [ 00:15:56.249 "4144d98c-f516-4889-a1df-6de4757304ff" 00:15:56.249 ], 00:15:56.249 "product_name": "Malloc disk", 00:15:56.249 "block_size": 512, 00:15:56.249 "num_blocks": 65536, 00:15:56.249 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:56.249 "assigned_rate_limits": { 00:15:56.249 "rw_ios_per_sec": 0, 00:15:56.249 "rw_mbytes_per_sec": 0, 00:15:56.249 "r_mbytes_per_sec": 0, 00:15:56.249 "w_mbytes_per_sec": 0 00:15:56.249 }, 00:15:56.249 "claimed": true, 00:15:56.249 "claim_type": "exclusive_write", 00:15:56.249 "zoned": false, 00:15:56.249 "supported_io_types": { 00:15:56.249 "read": true, 00:15:56.249 "write": true, 00:15:56.249 "unmap": true, 00:15:56.249 "flush": true, 00:15:56.249 "reset": true, 00:15:56.249 "nvme_admin": false, 00:15:56.249 "nvme_io": false, 00:15:56.249 "nvme_io_md": false, 00:15:56.249 "write_zeroes": true, 00:15:56.249 "zcopy": true, 00:15:56.249 "get_zone_info": false, 00:15:56.249 "zone_management": false, 00:15:56.249 "zone_append": false, 00:15:56.249 "compare": false, 00:15:56.249 "compare_and_write": false, 00:15:56.249 "abort": true, 00:15:56.249 "seek_hole": false, 00:15:56.249 "seek_data": false, 00:15:56.249 "copy": true, 00:15:56.250 "nvme_iov_md": false 00:15:56.250 }, 00:15:56.250 "memory_domains": [ 00:15:56.250 { 00:15:56.250 "dma_device_id": "system", 00:15:56.250 "dma_device_type": 1 00:15:56.250 }, 00:15:56.250 { 00:15:56.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.250 "dma_device_type": 2 00:15:56.250 } 00:15:56.250 ], 00:15:56.250 "driver_specific": {} 00:15:56.250 } 00:15:56.250 ] 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.250 14:15:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.250 "name": "Existed_Raid", 00:15:56.250 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:56.250 "strip_size_kb": 0, 00:15:56.250 
"state": "online", 00:15:56.250 "raid_level": "raid1", 00:15:56.250 "superblock": true, 00:15:56.250 "num_base_bdevs": 4, 00:15:56.250 "num_base_bdevs_discovered": 4, 00:15:56.250 "num_base_bdevs_operational": 4, 00:15:56.250 "base_bdevs_list": [ 00:15:56.250 { 00:15:56.250 "name": "NewBaseBdev", 00:15:56.250 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:56.250 "is_configured": true, 00:15:56.250 "data_offset": 2048, 00:15:56.250 "data_size": 63488 00:15:56.250 }, 00:15:56.250 { 00:15:56.250 "name": "BaseBdev2", 00:15:56.250 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:56.250 "is_configured": true, 00:15:56.250 "data_offset": 2048, 00:15:56.250 "data_size": 63488 00:15:56.250 }, 00:15:56.250 { 00:15:56.250 "name": "BaseBdev3", 00:15:56.250 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:56.250 "is_configured": true, 00:15:56.250 "data_offset": 2048, 00:15:56.250 "data_size": 63488 00:15:56.250 }, 00:15:56.250 { 00:15:56.250 "name": "BaseBdev4", 00:15:56.250 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:56.250 "is_configured": true, 00:15:56.250 "data_offset": 2048, 00:15:56.250 "data_size": 63488 00:15:56.250 } 00:15:56.250 ] 00:15:56.250 }' 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.250 14:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.818 
14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.818 [2024-11-27 14:15:27.047454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.818 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.818 "name": "Existed_Raid", 00:15:56.818 "aliases": [ 00:15:56.818 "f6baabae-d2b8-4621-9622-1ec161ba78c3" 00:15:56.818 ], 00:15:56.818 "product_name": "Raid Volume", 00:15:56.818 "block_size": 512, 00:15:56.818 "num_blocks": 63488, 00:15:56.818 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:56.818 "assigned_rate_limits": { 00:15:56.818 "rw_ios_per_sec": 0, 00:15:56.818 "rw_mbytes_per_sec": 0, 00:15:56.818 "r_mbytes_per_sec": 0, 00:15:56.818 "w_mbytes_per_sec": 0 00:15:56.818 }, 00:15:56.818 "claimed": false, 00:15:56.818 "zoned": false, 00:15:56.818 "supported_io_types": { 00:15:56.818 "read": true, 00:15:56.818 "write": true, 00:15:56.818 "unmap": false, 00:15:56.818 "flush": false, 00:15:56.818 "reset": true, 00:15:56.818 "nvme_admin": false, 00:15:56.818 "nvme_io": false, 00:15:56.818 "nvme_io_md": false, 00:15:56.818 "write_zeroes": true, 00:15:56.818 "zcopy": false, 00:15:56.818 "get_zone_info": false, 00:15:56.818 "zone_management": false, 00:15:56.818 "zone_append": false, 00:15:56.818 "compare": false, 00:15:56.818 "compare_and_write": false, 00:15:56.818 
"abort": false, 00:15:56.818 "seek_hole": false, 00:15:56.818 "seek_data": false, 00:15:56.818 "copy": false, 00:15:56.818 "nvme_iov_md": false 00:15:56.819 }, 00:15:56.819 "memory_domains": [ 00:15:56.819 { 00:15:56.819 "dma_device_id": "system", 00:15:56.819 "dma_device_type": 1 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.819 "dma_device_type": 2 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "system", 00:15:56.819 "dma_device_type": 1 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.819 "dma_device_type": 2 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "system", 00:15:56.819 "dma_device_type": 1 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.819 "dma_device_type": 2 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "system", 00:15:56.819 "dma_device_type": 1 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.819 "dma_device_type": 2 00:15:56.819 } 00:15:56.819 ], 00:15:56.819 "driver_specific": { 00:15:56.819 "raid": { 00:15:56.819 "uuid": "f6baabae-d2b8-4621-9622-1ec161ba78c3", 00:15:56.819 "strip_size_kb": 0, 00:15:56.819 "state": "online", 00:15:56.819 "raid_level": "raid1", 00:15:56.819 "superblock": true, 00:15:56.819 "num_base_bdevs": 4, 00:15:56.819 "num_base_bdevs_discovered": 4, 00:15:56.819 "num_base_bdevs_operational": 4, 00:15:56.819 "base_bdevs_list": [ 00:15:56.819 { 00:15:56.819 "name": "NewBaseBdev", 00:15:56.819 "uuid": "4144d98c-f516-4889-a1df-6de4757304ff", 00:15:56.819 "is_configured": true, 00:15:56.819 "data_offset": 2048, 00:15:56.819 "data_size": 63488 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "name": "BaseBdev2", 00:15:56.819 "uuid": "bbba3cb4-fc3a-4c14-902a-3bd2dbb790fb", 00:15:56.819 "is_configured": true, 00:15:56.819 "data_offset": 2048, 00:15:56.819 "data_size": 63488 00:15:56.819 }, 00:15:56.819 { 
00:15:56.819 "name": "BaseBdev3", 00:15:56.819 "uuid": "2b17b8b5-1279-472e-b32d-36247f6c9142", 00:15:56.819 "is_configured": true, 00:15:56.819 "data_offset": 2048, 00:15:56.819 "data_size": 63488 00:15:56.819 }, 00:15:56.819 { 00:15:56.819 "name": "BaseBdev4", 00:15:56.819 "uuid": "6b9c2226-e5b1-4e8a-8c80-dae173ca7fad", 00:15:56.819 "is_configured": true, 00:15:56.819 "data_offset": 2048, 00:15:56.819 "data_size": 63488 00:15:56.819 } 00:15:56.819 ] 00:15:56.819 } 00:15:56.819 } 00:15:56.819 }' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:56.819 BaseBdev2 00:15:56.819 BaseBdev3 00:15:56.819 BaseBdev4' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.819 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.078 [2024-11-27 14:15:27.419113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.078 [2024-11-27 14:15:27.419557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.078 [2024-11-27 14:15:27.419811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.078 [2024-11-27 14:15:27.420290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.078 [2024-11-27 14:15:27.420314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74188 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74188 ']' 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74188 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74188 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74188' 00:15:57.078 killing process with pid 74188 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74188 00:15:57.078 [2024-11-27 14:15:27.465266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.078 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74188 00:15:57.646 [2024-11-27 14:15:27.852465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.583 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:58.583 00:15:58.583 real 0m13.069s 00:15:58.583 user 0m21.316s 00:15:58.583 sys 0m2.005s 00:15:58.583 14:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:58.583 14:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 ************************************ 00:15:58.583 END TEST raid_state_function_test_sb 00:15:58.583 ************************************ 00:15:58.583 14:15:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:58.583 14:15:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:58.583 14:15:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.583 14:15:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 ************************************ 00:15:58.583 START TEST raid_superblock_test 00:15:58.583 ************************************ 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:58.583 14:15:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74875 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74875 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74875 ']' 00:15:58.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.583 14:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.845 [2024-11-27 14:15:29.178098] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:15:58.845 [2024-11-27 14:15:29.178277] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74875 ] 00:15:58.845 [2024-11-27 14:15:29.355593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.105 [2024-11-27 14:15:29.505653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.364 [2024-11-27 14:15:29.719702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.364 [2024-11-27 14:15:29.719783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.950 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.950 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:59.950 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:59.950 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.950 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:59.951 
14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 malloc1 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 [2024-11-27 14:15:30.237562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.951 [2024-11-27 14:15:30.238027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.951 [2024-11-27 14:15:30.238131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.951 [2024-11-27 14:15:30.238262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.951 [2024-11-27 14:15:30.241089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.951 [2024-11-27 14:15:30.241260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.951 pt1 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 malloc2 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 [2024-11-27 14:15:30.294558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.951 [2024-11-27 14:15:30.294904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.951 [2024-11-27 14:15:30.294987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.951 [2024-11-27 14:15:30.295110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.951 [2024-11-27 14:15:30.298432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.951 [2024-11-27 14:15:30.298633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.951 
pt2 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 malloc3 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 [2024-11-27 14:15:30.363950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:59.951 [2024-11-27 14:15:30.364054] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.951 [2024-11-27 14:15:30.364087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:59.951 [2024-11-27 14:15:30.364102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.951 [2024-11-27 14:15:30.367088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.951 [2024-11-27 14:15:30.367157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.951 pt3 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 malloc4 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 [2024-11-27 14:15:30.424508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:59.951 [2024-11-27 14:15:30.424623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.951 [2024-11-27 14:15:30.424663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:59.951 [2024-11-27 14:15:30.424678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.951 [2024-11-27 14:15:30.427883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.951 [2024-11-27 14:15:30.427924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:59.951 pt4 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:59.951 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 [2024-11-27 14:15:30.436565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.952 [2024-11-27 14:15:30.439107] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.952 [2024-11-27 14:15:30.439199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.952 [2024-11-27 14:15:30.439289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:59.952 [2024-11-27 14:15:30.439529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.952 [2024-11-27 14:15:30.439551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.952 [2024-11-27 14:15:30.439888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:59.952 [2024-11-27 14:15:30.440209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.952 [2024-11-27 14:15:30.440266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.952 [2024-11-27 14:15:30.440524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.952 
14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.211 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.211 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.211 "name": "raid_bdev1", 00:16:00.211 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:00.211 "strip_size_kb": 0, 00:16:00.211 "state": "online", 00:16:00.211 "raid_level": "raid1", 00:16:00.211 "superblock": true, 00:16:00.211 "num_base_bdevs": 4, 00:16:00.211 "num_base_bdevs_discovered": 4, 00:16:00.211 "num_base_bdevs_operational": 4, 00:16:00.211 "base_bdevs_list": [ 00:16:00.211 { 00:16:00.211 "name": "pt1", 00:16:00.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.211 "is_configured": true, 00:16:00.211 "data_offset": 2048, 00:16:00.211 "data_size": 63488 00:16:00.211 }, 00:16:00.211 { 00:16:00.211 "name": "pt2", 00:16:00.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.211 "is_configured": true, 00:16:00.211 "data_offset": 2048, 00:16:00.211 "data_size": 63488 00:16:00.211 }, 00:16:00.211 { 00:16:00.211 "name": "pt3", 00:16:00.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.211 "is_configured": true, 00:16:00.211 "data_offset": 2048, 00:16:00.211 "data_size": 63488 
00:16:00.211 }, 00:16:00.211 { 00:16:00.211 "name": "pt4", 00:16:00.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.211 "is_configured": true, 00:16:00.211 "data_offset": 2048, 00:16:00.211 "data_size": 63488 00:16:00.211 } 00:16:00.211 ] 00:16:00.211 }' 00:16:00.211 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.211 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.474 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.474 [2024-11-27 14:15:30.965255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.755 14:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.755 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.755 "name": "raid_bdev1", 00:16:00.755 "aliases": [ 00:16:00.755 "7f9ec0ed-1d7f-427f-9f85-48aea7428e86" 00:16:00.755 ], 
00:16:00.755 "product_name": "Raid Volume", 00:16:00.755 "block_size": 512, 00:16:00.755 "num_blocks": 63488, 00:16:00.755 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:00.755 "assigned_rate_limits": { 00:16:00.755 "rw_ios_per_sec": 0, 00:16:00.755 "rw_mbytes_per_sec": 0, 00:16:00.755 "r_mbytes_per_sec": 0, 00:16:00.755 "w_mbytes_per_sec": 0 00:16:00.755 }, 00:16:00.755 "claimed": false, 00:16:00.755 "zoned": false, 00:16:00.755 "supported_io_types": { 00:16:00.755 "read": true, 00:16:00.755 "write": true, 00:16:00.755 "unmap": false, 00:16:00.755 "flush": false, 00:16:00.755 "reset": true, 00:16:00.755 "nvme_admin": false, 00:16:00.755 "nvme_io": false, 00:16:00.755 "nvme_io_md": false, 00:16:00.755 "write_zeroes": true, 00:16:00.755 "zcopy": false, 00:16:00.755 "get_zone_info": false, 00:16:00.755 "zone_management": false, 00:16:00.755 "zone_append": false, 00:16:00.755 "compare": false, 00:16:00.755 "compare_and_write": false, 00:16:00.755 "abort": false, 00:16:00.755 "seek_hole": false, 00:16:00.755 "seek_data": false, 00:16:00.755 "copy": false, 00:16:00.755 "nvme_iov_md": false 00:16:00.755 }, 00:16:00.755 "memory_domains": [ 00:16:00.755 { 00:16:00.755 "dma_device_id": "system", 00:16:00.755 "dma_device_type": 1 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.755 "dma_device_type": 2 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": "system", 00:16:00.755 "dma_device_type": 1 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.755 "dma_device_type": 2 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": "system", 00:16:00.755 "dma_device_type": 1 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.755 "dma_device_type": 2 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": "system", 00:16:00.755 "dma_device_type": 1 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:00.755 "dma_device_type": 2 00:16:00.755 } 00:16:00.755 ], 00:16:00.755 "driver_specific": { 00:16:00.755 "raid": { 00:16:00.755 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:00.755 "strip_size_kb": 0, 00:16:00.755 "state": "online", 00:16:00.755 "raid_level": "raid1", 00:16:00.755 "superblock": true, 00:16:00.755 "num_base_bdevs": 4, 00:16:00.755 "num_base_bdevs_discovered": 4, 00:16:00.755 "num_base_bdevs_operational": 4, 00:16:00.755 "base_bdevs_list": [ 00:16:00.755 { 00:16:00.755 "name": "pt1", 00:16:00.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.755 "is_configured": true, 00:16:00.755 "data_offset": 2048, 00:16:00.755 "data_size": 63488 00:16:00.755 }, 00:16:00.755 { 00:16:00.755 "name": "pt2", 00:16:00.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.755 "is_configured": true, 00:16:00.755 "data_offset": 2048, 00:16:00.755 "data_size": 63488 00:16:00.756 }, 00:16:00.756 { 00:16:00.756 "name": "pt3", 00:16:00.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.756 "is_configured": true, 00:16:00.756 "data_offset": 2048, 00:16:00.756 "data_size": 63488 00:16:00.756 }, 00:16:00.756 { 00:16:00.756 "name": "pt4", 00:16:00.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.756 "is_configured": true, 00:16:00.756 "data_offset": 2048, 00:16:00.756 "data_size": 63488 00:16:00.756 } 00:16:00.756 ] 00:16:00.756 } 00:16:00.756 } 00:16:00.756 }' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:00.756 pt2 00:16:00.756 pt3 00:16:00.756 pt4' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.756 14:15:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.756 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 [2024-11-27 14:15:31.329134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f9ec0ed-1d7f-427f-9f85-48aea7428e86 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7f9ec0ed-1d7f-427f-9f85-48aea7428e86 ']' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 [2024-11-27 14:15:31.372790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.015 [2024-11-27 14:15:31.372821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.015 [2024-11-27 14:15:31.372959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.015 [2024-11-27 14:15:31.373071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.015 [2024-11-27 14:15:31.373096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:01.015 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.015 14:15:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.274 [2024-11-27 14:15:31.532971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:01.274 [2024-11-27 14:15:31.535835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:01.274 [2024-11-27 14:15:31.535925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:01.274 [2024-11-27 14:15:31.535986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:01.274 [2024-11-27 14:15:31.536069] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:01.274 [2024-11-27 14:15:31.536149] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:01.274 [2024-11-27 14:15:31.536183] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:01.274 [2024-11-27 14:15:31.536214] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:01.274 [2024-11-27 14:15:31.536237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.274 [2024-11-27 14:15:31.536254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:16:01.274 request: 00:16:01.274 { 00:16:01.274 "name": "raid_bdev1", 00:16:01.274 "raid_level": "raid1", 00:16:01.274 "base_bdevs": [ 00:16:01.274 "malloc1", 00:16:01.274 "malloc2", 00:16:01.274 "malloc3", 00:16:01.274 "malloc4" 00:16:01.274 ], 00:16:01.274 "superblock": false, 00:16:01.274 "method": "bdev_raid_create", 00:16:01.274 "req_id": 1 00:16:01.274 } 00:16:01.274 Got JSON-RPC error response 00:16:01.274 response: 00:16:01.274 { 00:16:01.274 "code": -17, 00:16:01.274 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:01.274 } 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.274 
14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.274 [2024-11-27 14:15:31.605042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.274 [2024-11-27 14:15:31.605153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.274 [2024-11-27 14:15:31.605200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:01.274 [2024-11-27 14:15:31.605219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.274 [2024-11-27 14:15:31.608450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.274 [2024-11-27 14:15:31.608506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.274 [2024-11-27 14:15:31.608636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.274 [2024-11-27 14:15:31.608725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.274 pt1 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.274 14:15:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.274 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.274 "name": "raid_bdev1", 00:16:01.274 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:01.274 "strip_size_kb": 0, 00:16:01.274 "state": "configuring", 00:16:01.274 "raid_level": "raid1", 00:16:01.274 "superblock": true, 00:16:01.274 "num_base_bdevs": 4, 00:16:01.274 "num_base_bdevs_discovered": 1, 00:16:01.274 "num_base_bdevs_operational": 4, 00:16:01.274 "base_bdevs_list": [ 00:16:01.274 { 00:16:01.274 "name": "pt1", 00:16:01.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.274 "is_configured": true, 00:16:01.274 "data_offset": 2048, 00:16:01.274 "data_size": 63488 00:16:01.274 }, 00:16:01.274 { 00:16:01.274 "name": null, 00:16:01.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.274 "is_configured": false, 00:16:01.274 "data_offset": 2048, 00:16:01.274 "data_size": 63488 00:16:01.274 }, 00:16:01.274 { 00:16:01.274 "name": null, 00:16:01.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.274 
"is_configured": false, 00:16:01.274 "data_offset": 2048, 00:16:01.274 "data_size": 63488 00:16:01.274 }, 00:16:01.274 { 00:16:01.274 "name": null, 00:16:01.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.274 "is_configured": false, 00:16:01.274 "data_offset": 2048, 00:16:01.275 "data_size": 63488 00:16:01.275 } 00:16:01.275 ] 00:16:01.275 }' 00:16:01.275 14:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.275 14:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.843 [2024-11-27 14:15:32.153286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.843 [2024-11-27 14:15:32.153413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.843 [2024-11-27 14:15:32.153460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:01.843 [2024-11-27 14:15:32.153478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.843 [2024-11-27 14:15:32.154159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.843 [2024-11-27 14:15:32.154196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.843 [2024-11-27 14:15:32.154311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.843 [2024-11-27 14:15:32.154351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:01.843 pt2 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.843 [2024-11-27 14:15:32.161204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.843 "name": "raid_bdev1", 00:16:01.843 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:01.843 "strip_size_kb": 0, 00:16:01.843 "state": "configuring", 00:16:01.843 "raid_level": "raid1", 00:16:01.843 "superblock": true, 00:16:01.843 "num_base_bdevs": 4, 00:16:01.843 "num_base_bdevs_discovered": 1, 00:16:01.843 "num_base_bdevs_operational": 4, 00:16:01.843 "base_bdevs_list": [ 00:16:01.843 { 00:16:01.843 "name": "pt1", 00:16:01.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.843 "is_configured": true, 00:16:01.843 "data_offset": 2048, 00:16:01.843 "data_size": 63488 00:16:01.843 }, 00:16:01.843 { 00:16:01.843 "name": null, 00:16:01.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.843 "is_configured": false, 00:16:01.843 "data_offset": 0, 00:16:01.843 "data_size": 63488 00:16:01.843 }, 00:16:01.843 { 00:16:01.843 "name": null, 00:16:01.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.843 "is_configured": false, 00:16:01.843 "data_offset": 2048, 00:16:01.843 "data_size": 63488 00:16:01.843 }, 00:16:01.843 { 00:16:01.843 "name": null, 00:16:01.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.843 "is_configured": false, 00:16:01.843 "data_offset": 2048, 00:16:01.843 "data_size": 63488 00:16:01.843 } 00:16:01.843 ] 00:16:01.843 }' 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.843 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.412 [2024-11-27 14:15:32.689438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.412 [2024-11-27 14:15:32.689545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.412 [2024-11-27 14:15:32.689578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:02.412 [2024-11-27 14:15:32.689593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.412 [2024-11-27 14:15:32.690412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.412 [2024-11-27 14:15:32.690446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.412 [2024-11-27 14:15:32.690575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:02.412 [2024-11-27 14:15:32.690610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.412 pt2 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:02.412 14:15:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.412 [2024-11-27 14:15:32.697338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:02.412 [2024-11-27 14:15:32.697400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.412 [2024-11-27 14:15:32.697426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:02.412 [2024-11-27 14:15:32.697439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.412 [2024-11-27 14:15:32.697909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.412 [2024-11-27 14:15:32.697941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:02.412 [2024-11-27 14:15:32.698068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:02.412 [2024-11-27 14:15:32.698107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:02.412 pt3 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.412 [2024-11-27 14:15:32.705313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:02.412 [2024-11-27 
14:15:32.705377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.412 [2024-11-27 14:15:32.705402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:02.412 [2024-11-27 14:15:32.705414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.412 [2024-11-27 14:15:32.705895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.412 [2024-11-27 14:15:32.705925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:02.412 [2024-11-27 14:15:32.706014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:02.412 [2024-11-27 14:15:32.706076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:02.412 [2024-11-27 14:15:32.706267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:02.412 [2024-11-27 14:15:32.706283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:02.412 [2024-11-27 14:15:32.706650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:02.412 [2024-11-27 14:15:32.706856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:02.412 [2024-11-27 14:15:32.706877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:02.412 [2024-11-27 14:15:32.707048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.412 pt4 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:02.412 "name": "raid_bdev1",
00:16:02.412 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:02.412 "strip_size_kb": 0,
00:16:02.412 "state": "online",
00:16:02.412 "raid_level": "raid1",
00:16:02.412 "superblock": true,
00:16:02.412 "num_base_bdevs": 4,
00:16:02.412 "num_base_bdevs_discovered": 4,
00:16:02.412 "num_base_bdevs_operational": 4,
00:16:02.412 "base_bdevs_list": [
00:16:02.412 {
00:16:02.412 "name": "pt1",
00:16:02.412 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:02.412 "is_configured": true,
00:16:02.412 "data_offset": 2048,
00:16:02.412 "data_size": 63488
00:16:02.412 },
00:16:02.412 {
00:16:02.412 "name": "pt2",
00:16:02.412 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:02.412 "is_configured": true,
00:16:02.412 "data_offset": 2048,
00:16:02.412 "data_size": 63488
00:16:02.412 },
00:16:02.412 {
00:16:02.412 "name": "pt3",
00:16:02.412 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:02.412 "is_configured": true,
00:16:02.412 "data_offset": 2048,
00:16:02.412 "data_size": 63488
00:16:02.412 },
00:16:02.412 {
00:16:02.412 "name": "pt4",
00:16:02.412 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:02.412 "is_configured": true,
00:16:02.412 "data_offset": 2048,
00:16:02.412 "data_size": 63488
00:16:02.412 }
00:16:02.412 ]
00:16:02.412 }'
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:02.412 14:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.980 [2024-11-27 14:15:33.278002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.980 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:02.980 "name": "raid_bdev1",
00:16:02.980 "aliases": [
00:16:02.980 "7f9ec0ed-1d7f-427f-9f85-48aea7428e86"
00:16:02.980 ],
00:16:02.980 "product_name": "Raid Volume",
00:16:02.980 "block_size": 512,
00:16:02.980 "num_blocks": 63488,
00:16:02.980 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:02.980 "assigned_rate_limits": {
00:16:02.980 "rw_ios_per_sec": 0,
00:16:02.980 "rw_mbytes_per_sec": 0,
00:16:02.980 "r_mbytes_per_sec": 0,
00:16:02.980 "w_mbytes_per_sec": 0
00:16:02.980 },
00:16:02.980 "claimed": false,
00:16:02.980 "zoned": false,
00:16:02.980 "supported_io_types": {
00:16:02.980 "read": true,
00:16:02.980 "write": true,
00:16:02.980 "unmap": false,
00:16:02.980 "flush": false,
00:16:02.980 "reset": true,
00:16:02.980 "nvme_admin": false,
00:16:02.980 "nvme_io": false,
00:16:02.980 "nvme_io_md": false,
00:16:02.980 "write_zeroes": true,
00:16:02.980 "zcopy": false,
00:16:02.980 "get_zone_info": false,
00:16:02.980 "zone_management": false,
00:16:02.980 "zone_append": false,
00:16:02.980 "compare": false,
00:16:02.980 "compare_and_write": false,
00:16:02.980 "abort": false,
00:16:02.980 "seek_hole": false,
00:16:02.980 "seek_data": false,
00:16:02.980 "copy": false,
00:16:02.980 "nvme_iov_md": false
00:16:02.980 },
00:16:02.980 "memory_domains": [
00:16:02.980 {
00:16:02.980 "dma_device_id": "system",
00:16:02.980 "dma_device_type": 1
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.980 "dma_device_type": 2
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "system",
00:16:02.980 "dma_device_type": 1
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.980 "dma_device_type": 2
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "system",
00:16:02.980 "dma_device_type": 1
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.980 "dma_device_type": 2
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "system",
00:16:02.980 "dma_device_type": 1
00:16:02.980 },
00:16:02.980 {
00:16:02.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.980 "dma_device_type": 2
00:16:02.980 }
00:16:02.980 ],
00:16:02.980 "driver_specific": {
00:16:02.980 "raid": {
00:16:02.980 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:02.980 "strip_size_kb": 0,
00:16:02.980 "state": "online",
00:16:02.980 "raid_level": "raid1",
00:16:02.980 "superblock": true,
00:16:02.981 "num_base_bdevs": 4,
00:16:02.981 "num_base_bdevs_discovered": 4,
00:16:02.981 "num_base_bdevs_operational": 4,
00:16:02.981 "base_bdevs_list": [
00:16:02.981 {
00:16:02.981 "name": "pt1",
00:16:02.981 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:02.981 "is_configured": true,
00:16:02.981 "data_offset": 2048,
00:16:02.981 "data_size": 63488
00:16:02.981 },
00:16:02.981 {
00:16:02.981 "name": "pt2",
00:16:02.981 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:02.981 "is_configured": true,
00:16:02.981 "data_offset": 2048,
00:16:02.981 "data_size": 63488
00:16:02.981 },
00:16:02.981 {
00:16:02.981 "name": "pt3",
00:16:02.981 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:02.981 "is_configured": true,
00:16:02.981 "data_offset": 2048,
00:16:02.981 "data_size": 63488
00:16:02.981 },
00:16:02.981 {
00:16:02.981 "name": "pt4",
00:16:02.981 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:02.981 "is_configured": true,
00:16:02.981 "data_offset": 2048,
00:16:02.981 "data_size": 63488
00:16:02.981 }
00:16:02.981 ]
00:16:02.981 }
00:16:02.981 }
00:16:02.981 }'
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:02.981 pt2
00:16:02.981 pt3
00:16:02.981 pt4'
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.981 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.240 [2024-11-27 14:15:33.657941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7f9ec0ed-1d7f-427f-9f85-48aea7428e86 '!=' 7f9ec0ed-1d7f-427f-9f85-48aea7428e86 ']'
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.240 [2024-11-27 14:15:33.705640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.240 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.499 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:03.499 "name": "raid_bdev1",
00:16:03.499 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:03.499 "strip_size_kb": 0,
00:16:03.499 "state": "online",
00:16:03.499 "raid_level": "raid1",
00:16:03.499 "superblock": true,
00:16:03.499 "num_base_bdevs": 4,
00:16:03.499 "num_base_bdevs_discovered": 3,
00:16:03.499 "num_base_bdevs_operational": 3,
00:16:03.499 "base_bdevs_list": [
00:16:03.499 {
00:16:03.499 "name": null,
00:16:03.499 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.499 "is_configured": false,
00:16:03.499 "data_offset": 0,
00:16:03.499 "data_size": 63488
00:16:03.499 },
00:16:03.499 {
00:16:03.499 "name": "pt2",
00:16:03.499 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:03.499 "is_configured": true,
00:16:03.499 "data_offset": 2048,
00:16:03.499 "data_size": 63488
00:16:03.499 },
00:16:03.499 {
00:16:03.499 "name": "pt3",
00:16:03.499 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:03.499 "is_configured": true,
00:16:03.499 "data_offset": 2048,
00:16:03.499 "data_size": 63488
00:16:03.499 },
00:16:03.499 {
00:16:03.499 "name": "pt4",
00:16:03.499 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:03.499 "is_configured": true,
00:16:03.499 "data_offset": 2048,
00:16:03.499 "data_size": 63488
00:16:03.499 }
00:16:03.499 ]
00:16:03.499 }'
00:16:03.499 14:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:03.499 14:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.758 [2024-11-27 14:15:34.201751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:03.758 [2024-11-27 14:15:34.202112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:03.758 [2024-11-27 14:15:34.202260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:03.758 [2024-11-27 14:15:34.202380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:03.758 [2024-11-27 14:15:34.202399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.758 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.017 [2024-11-27 14:15:34.293696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:04.017 [2024-11-27 14:15:34.293985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:04.017 [2024-11-27 14:15:34.294028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:16:04.017 [2024-11-27 14:15:34.294073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:04.017 [2024-11-27 14:15:34.297023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:04.017 [2024-11-27 14:15:34.297209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:04.017 [2024-11-27 14:15:34.297330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:04.017 [2024-11-27 14:15:34.297401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:04.017 pt2
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:04.017 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:04.018 "name": "raid_bdev1",
00:16:04.018 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:04.018 "strip_size_kb": 0,
00:16:04.018 "state": "configuring",
00:16:04.018 "raid_level": "raid1",
00:16:04.018 "superblock": true,
00:16:04.018 "num_base_bdevs": 4,
00:16:04.018 "num_base_bdevs_discovered": 1,
00:16:04.018 "num_base_bdevs_operational": 3,
00:16:04.018 "base_bdevs_list": [
00:16:04.018 {
00:16:04.018 "name": null,
00:16:04.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.018 "is_configured": false,
00:16:04.018 "data_offset": 2048,
00:16:04.018 "data_size": 63488
00:16:04.018 },
00:16:04.018 {
00:16:04.018 "name": "pt2",
00:16:04.018 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:04.018 "is_configured": true,
00:16:04.018 "data_offset": 2048,
00:16:04.018 "data_size": 63488
00:16:04.018 },
00:16:04.018 {
00:16:04.018 "name": null,
00:16:04.018 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:04.018 "is_configured": false,
00:16:04.018 "data_offset": 2048,
00:16:04.018 "data_size": 63488
00:16:04.018 },
00:16:04.018 {
00:16:04.018 "name": null,
00:16:04.018 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:04.018 "is_configured": false,
00:16:04.018 "data_offset": 2048,
00:16:04.018 "data_size": 63488
00:16:04.018 }
00:16:04.018 ]
00:16:04.018 }'
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:04.018 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.586 [2024-11-27 14:15:34.805965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:04.586 [2024-11-27 14:15:34.806095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:04.586 [2024-11-27 14:15:34.806134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:04.586 [2024-11-27 14:15:34.806151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:04.586 [2024-11-27 14:15:34.806949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:04.586 [2024-11-27 14:15:34.806990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:04.586 [2024-11-27 14:15:34.807113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:16:04.586 [2024-11-27 14:15:34.807163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:04.586 pt3
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:04.586 "name": "raid_bdev1",
00:16:04.586 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:04.586 "strip_size_kb": 0,
00:16:04.586 "state": "configuring",
00:16:04.586 "raid_level": "raid1",
00:16:04.586 "superblock": true,
00:16:04.586 "num_base_bdevs": 4,
00:16:04.586 "num_base_bdevs_discovered": 2,
00:16:04.586 "num_base_bdevs_operational": 3,
00:16:04.586 "base_bdevs_list": [
00:16:04.586 {
00:16:04.586 "name": null,
00:16:04.586 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.586 "is_configured": false,
00:16:04.586 "data_offset": 2048,
00:16:04.586 "data_size": 63488
00:16:04.586 },
00:16:04.586 {
00:16:04.586 "name": "pt2",
00:16:04.586 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:04.586 "is_configured": true,
00:16:04.586 "data_offset": 2048,
00:16:04.586 "data_size": 63488
00:16:04.586 },
00:16:04.586 {
00:16:04.586 "name": "pt3",
00:16:04.586 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:04.586 "is_configured": true,
00:16:04.586 "data_offset": 2048,
00:16:04.586 "data_size": 63488
00:16:04.586 },
00:16:04.586 {
00:16:04.586 "name": null,
00:16:04.586 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:04.586 "is_configured": false,
00:16:04.586 "data_offset": 2048,
00:16:04.586 "data_size": 63488
00:16:04.586 }
00:16:04.586 ]
00:16:04.586 }'
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:04.586 14:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.845 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:04.845 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:04.845 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:16:04.845 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:04.845 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.845 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.845 [2024-11-27 14:15:35.350096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:04.845 [2024-11-27 14:15:35.350228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:04.845 [2024-11-27 14:15:35.350271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:16:04.845 [2024-11-27 14:15:35.350288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:04.845 [2024-11-27 14:15:35.350964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:04.845 [2024-11-27 14:15:35.350988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:04.845 [2024-11-27 14:15:35.351104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:04.845 [2024-11-27 14:15:35.351137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:04.845 [2024-11-27 14:15:35.351342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:16:04.845 [2024-11-27 14:15:35.351357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:04.845 [2024-11-27 14:15:35.351639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:16:04.845 [2024-11-27 14:15:35.351843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:16:04.845 [2024-11-27 14:15:35.351865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:16:04.845 [2024-11-27 14:15:35.352022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:05.104 pt4
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:05.104 "name": "raid_bdev1",
00:16:05.104 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86",
00:16:05.104 "strip_size_kb": 0,
00:16:05.104 "state": "online",
00:16:05.104 "raid_level": "raid1",
00:16:05.104 "superblock": true,
00:16:05.104 "num_base_bdevs": 4,
00:16:05.104 "num_base_bdevs_discovered": 3,
00:16:05.104 "num_base_bdevs_operational": 3,
00:16:05.104 "base_bdevs_list": [
00:16:05.104 {
00:16:05.104 "name": null,
00:16:05.104 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.104 "is_configured": false,
00:16:05.104 "data_offset": 2048,
00:16:05.104 "data_size": 63488
00:16:05.104 },
00:16:05.104 {
00:16:05.104 "name": "pt2",
00:16:05.104 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:05.104 "is_configured": true,
00:16:05.104 "data_offset": 2048,
00:16:05.104 "data_size": 63488
00:16:05.104 },
00:16:05.104 {
00:16:05.104 "name": "pt3",
00:16:05.104 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:05.104 "is_configured": true,
00:16:05.104 "data_offset": 2048,
00:16:05.104 "data_size": 63488
00:16:05.104 },
00:16:05.104 {
00:16:05.104 "name": "pt4",
00:16:05.104 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:05.104 "is_configured": true,
00:16:05.104 "data_offset": 2048,
00:16:05.104 "data_size": 63488
00:16:05.104 }
00:16:05.104 ]
00:16:05.104 }'
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:05.104 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.673 [2024-11-27 14:15:35.882159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:05.673 [2024-11-27 14:15:35.882482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:05.673 [2024-11-27 14:15:35.882629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:05.673 [2024-11-27 14:15:35.882733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:05.673 [2024-11-27 14:15:35.882756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.673 [2024-11-27 14:15:35.958181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:05.673 [2024-11-27 14:15:35.958275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:05.673 [2024-11-27 14:15:35.958306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:16:05.673 [2024-11-27 14:15:35.958327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:05.673 [2024-11-27 14:15:35.961597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:05.673 [2024-11-27 14:15:35.961680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:05.673 [2024-11-27 14:15:35.961795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:05.673 [2024-11-27 14:15:35.961912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:05.673 [2024-11-27 14:15:35.962139] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:05.673 [2024-11-27 14:15:35.962166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:05.673 [2024-11-27 14:15:35.962189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:16:05.673 [2024-11-27 14:15:35.962281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:05.673 [2024-11-27 14:15:35.962487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:05.673 pt1
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.673 14:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.673 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.673 "name": "raid_bdev1", 00:16:05.673 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:05.673 "strip_size_kb": 0, 00:16:05.673 "state": "configuring", 00:16:05.673 "raid_level": "raid1", 00:16:05.673 "superblock": true, 00:16:05.673 "num_base_bdevs": 4, 00:16:05.673 "num_base_bdevs_discovered": 2, 00:16:05.673 "num_base_bdevs_operational": 3, 00:16:05.673 "base_bdevs_list": [ 00:16:05.673 { 00:16:05.673 "name": null, 00:16:05.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.673 "is_configured": false, 00:16:05.673 "data_offset": 2048, 00:16:05.673 
"data_size": 63488 00:16:05.673 }, 00:16:05.673 { 00:16:05.673 "name": "pt2", 00:16:05.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.673 "is_configured": true, 00:16:05.673 "data_offset": 2048, 00:16:05.673 "data_size": 63488 00:16:05.673 }, 00:16:05.673 { 00:16:05.673 "name": "pt3", 00:16:05.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.673 "is_configured": true, 00:16:05.673 "data_offset": 2048, 00:16:05.673 "data_size": 63488 00:16:05.673 }, 00:16:05.673 { 00:16:05.673 "name": null, 00:16:05.673 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.673 "is_configured": false, 00:16:05.674 "data_offset": 2048, 00:16:05.674 "data_size": 63488 00:16:05.674 } 00:16:05.674 ] 00:16:05.674 }' 00:16:05.674 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.674 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 [2024-11-27 
14:15:36.554446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:06.241 [2024-11-27 14:15:36.554849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.241 [2024-11-27 14:15:36.554905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:06.241 [2024-11-27 14:15:36.554923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.241 [2024-11-27 14:15:36.555594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.241 [2024-11-27 14:15:36.555634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:06.241 [2024-11-27 14:15:36.555749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:06.241 [2024-11-27 14:15:36.555782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:06.241 [2024-11-27 14:15:36.556001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:06.241 [2024-11-27 14:15:36.556031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.241 [2024-11-27 14:15:36.556374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:06.241 [2024-11-27 14:15:36.556554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:06.241 [2024-11-27 14:15:36.556573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:06.241 [2024-11-27 14:15:36.556738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.241 pt4 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:06.241 14:15:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.241 "name": "raid_bdev1", 00:16:06.241 "uuid": "7f9ec0ed-1d7f-427f-9f85-48aea7428e86", 00:16:06.241 "strip_size_kb": 0, 00:16:06.241 "state": "online", 00:16:06.241 "raid_level": "raid1", 00:16:06.241 "superblock": true, 00:16:06.241 "num_base_bdevs": 4, 00:16:06.241 "num_base_bdevs_discovered": 3, 00:16:06.241 "num_base_bdevs_operational": 3, 00:16:06.241 "base_bdevs_list": [ 00:16:06.241 { 
00:16:06.241 "name": null, 00:16:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.241 "is_configured": false, 00:16:06.241 "data_offset": 2048, 00:16:06.241 "data_size": 63488 00:16:06.241 }, 00:16:06.241 { 00:16:06.241 "name": "pt2", 00:16:06.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.241 "is_configured": true, 00:16:06.241 "data_offset": 2048, 00:16:06.241 "data_size": 63488 00:16:06.241 }, 00:16:06.241 { 00:16:06.241 "name": "pt3", 00:16:06.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.241 "is_configured": true, 00:16:06.241 "data_offset": 2048, 00:16:06.241 "data_size": 63488 00:16:06.241 }, 00:16:06.241 { 00:16:06.241 "name": "pt4", 00:16:06.241 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.241 "is_configured": true, 00:16:06.241 "data_offset": 2048, 00:16:06.241 "data_size": 63488 00:16:06.241 } 00:16:06.241 ] 00:16:06.241 }' 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.241 14:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.869 
14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.869 [2024-11-27 14:15:37.151030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7f9ec0ed-1d7f-427f-9f85-48aea7428e86 '!=' 7f9ec0ed-1d7f-427f-9f85-48aea7428e86 ']' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74875 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74875 ']' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74875 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74875 00:16:06.869 killing process with pid 74875 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74875' 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74875 00:16:06.869 [2024-11-27 14:15:37.225115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.869 14:15:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74875 00:16:06.869 [2024-11-27 14:15:37.225250] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.869 [2024-11-27 14:15:37.225367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.869 [2024-11-27 14:15:37.225389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:07.143 [2024-11-27 14:15:37.557826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.519 14:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:08.519 00:16:08.519 real 0m9.542s 00:16:08.519 user 0m15.628s 00:16:08.519 sys 0m1.459s 00:16:08.519 ************************************ 00:16:08.519 END TEST raid_superblock_test 00:16:08.519 ************************************ 00:16:08.519 14:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.519 14:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.519 14:15:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:08.519 14:15:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:08.519 14:15:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.519 14:15:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.519 ************************************ 00:16:08.519 START TEST raid_read_error_test 00:16:08.519 ************************************ 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:08.519 14:15:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CAA6VqrQ4H 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75368 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75368 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75368 ']' 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:08.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.519 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.520 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.520 14:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.520 [2024-11-27 14:15:38.788956] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:16:08.520 [2024-11-27 14:15:38.789106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75368 ] 00:16:08.520 [2024-11-27 14:15:38.961854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.778 [2024-11-27 14:15:39.101888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.037 [2024-11-27 14:15:39.317604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.037 [2024-11-27 14:15:39.317706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.296 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.296 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:09.296 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:09.296 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.296 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.296 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 BaseBdev1_malloc 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 true 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 [2024-11-27 14:15:39.833738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:09.555 [2024-11-27 14:15:39.833807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.555 [2024-11-27 14:15:39.833856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:09.555 [2024-11-27 14:15:39.833874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.555 [2024-11-27 14:15:39.836648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.555 [2024-11-27 14:15:39.836693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.555 BaseBdev1 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 BaseBdev2_malloc 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 true 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 [2024-11-27 14:15:39.891216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:09.555 [2024-11-27 14:15:39.891298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.555 [2024-11-27 14:15:39.891323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:09.555 [2024-11-27 14:15:39.891339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.555 [2024-11-27 14:15:39.894194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.555 [2024-11-27 14:15:39.894241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.555 BaseBdev2 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.555 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 BaseBdev3_malloc 00:16:09.556 14:15:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 true 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 [2024-11-27 14:15:39.968176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:09.556 [2024-11-27 14:15:39.968252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.556 [2024-11-27 14:15:39.968277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:09.556 [2024-11-27 14:15:39.968294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.556 [2024-11-27 14:15:39.971224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.556 [2024-11-27 14:15:39.971267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.556 BaseBdev3 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 BaseBdev4_malloc 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 true 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 [2024-11-27 14:15:40.029901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:09.556 [2024-11-27 14:15:40.030000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.556 [2024-11-27 14:15:40.030026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:09.556 [2024-11-27 14:15:40.030051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.556 [2024-11-27 14:15:40.032935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.556 [2024-11-27 14:15:40.032989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:09.556 BaseBdev4 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 [2024-11-27 14:15:40.041993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.556 [2024-11-27 14:15:40.044455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.556 [2024-11-27 14:15:40.044566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.556 [2024-11-27 14:15:40.044671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.556 [2024-11-27 14:15:40.044982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:09.556 [2024-11-27 14:15:40.045026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:09.556 [2024-11-27 14:15:40.045301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:09.556 [2024-11-27 14:15:40.045516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:09.556 [2024-11-27 14:15:40.045538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:09.556 [2024-11-27 14:15:40.045706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:09.556 14:15:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.814 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.814 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.814 "name": "raid_bdev1", 00:16:09.814 "uuid": "390cfe05-0b35-435b-9c68-47917677f0f9", 00:16:09.814 "strip_size_kb": 0, 00:16:09.814 "state": "online", 00:16:09.814 "raid_level": "raid1", 00:16:09.814 "superblock": true, 00:16:09.814 "num_base_bdevs": 4, 00:16:09.814 "num_base_bdevs_discovered": 4, 00:16:09.814 "num_base_bdevs_operational": 4, 00:16:09.814 "base_bdevs_list": [ 00:16:09.814 { 
00:16:09.814 "name": "BaseBdev1", 00:16:09.814 "uuid": "b21f3ddd-8886-5ce8-b637-db9bad9eb86b", 00:16:09.814 "is_configured": true, 00:16:09.814 "data_offset": 2048, 00:16:09.814 "data_size": 63488 00:16:09.814 }, 00:16:09.814 { 00:16:09.814 "name": "BaseBdev2", 00:16:09.814 "uuid": "954c57af-c3ed-5974-be59-f203157e94e8", 00:16:09.814 "is_configured": true, 00:16:09.814 "data_offset": 2048, 00:16:09.814 "data_size": 63488 00:16:09.814 }, 00:16:09.814 { 00:16:09.814 "name": "BaseBdev3", 00:16:09.814 "uuid": "04e87ebe-df3a-5f42-bca3-d168b147d1a2", 00:16:09.814 "is_configured": true, 00:16:09.814 "data_offset": 2048, 00:16:09.814 "data_size": 63488 00:16:09.814 }, 00:16:09.814 { 00:16:09.814 "name": "BaseBdev4", 00:16:09.814 "uuid": "d232eabe-d0e8-53c9-b146-44e8516d6e44", 00:16:09.814 "is_configured": true, 00:16:09.814 "data_offset": 2048, 00:16:09.814 "data_size": 63488 00:16:09.814 } 00:16:09.814 ] 00:16:09.814 }' 00:16:09.814 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.814 14:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.072 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:10.072 14:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:10.331 [2024-11-27 14:15:40.691596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.267 14:15:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.267 14:15:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.267 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.268 "name": "raid_bdev1", 00:16:11.268 "uuid": "390cfe05-0b35-435b-9c68-47917677f0f9", 00:16:11.268 "strip_size_kb": 0, 00:16:11.268 "state": "online", 00:16:11.268 "raid_level": "raid1", 00:16:11.268 "superblock": true, 00:16:11.268 "num_base_bdevs": 4, 00:16:11.268 "num_base_bdevs_discovered": 4, 00:16:11.268 "num_base_bdevs_operational": 4, 00:16:11.268 "base_bdevs_list": [ 00:16:11.268 { 00:16:11.268 "name": "BaseBdev1", 00:16:11.268 "uuid": "b21f3ddd-8886-5ce8-b637-db9bad9eb86b", 00:16:11.268 "is_configured": true, 00:16:11.268 "data_offset": 2048, 00:16:11.268 "data_size": 63488 00:16:11.268 }, 00:16:11.268 { 00:16:11.268 "name": "BaseBdev2", 00:16:11.268 "uuid": "954c57af-c3ed-5974-be59-f203157e94e8", 00:16:11.268 "is_configured": true, 00:16:11.268 "data_offset": 2048, 00:16:11.268 "data_size": 63488 00:16:11.268 }, 00:16:11.268 { 00:16:11.268 "name": "BaseBdev3", 00:16:11.268 "uuid": "04e87ebe-df3a-5f42-bca3-d168b147d1a2", 00:16:11.268 "is_configured": true, 00:16:11.268 "data_offset": 2048, 00:16:11.268 "data_size": 63488 00:16:11.268 }, 00:16:11.268 { 00:16:11.268 "name": "BaseBdev4", 00:16:11.268 "uuid": "d232eabe-d0e8-53c9-b146-44e8516d6e44", 00:16:11.268 "is_configured": true, 00:16:11.268 "data_offset": 2048, 00:16:11.268 "data_size": 63488 00:16:11.268 } 00:16:11.268 ] 00:16:11.268 }' 00:16:11.268 14:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.268 14:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.835 [2024-11-27 14:15:42.098001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.835 [2024-11-27 14:15:42.098101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.835 [2024-11-27 14:15:42.101472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.835 [2024-11-27 14:15:42.101555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.835 [2024-11-27 14:15:42.101706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.835 [2024-11-27 14:15:42.101730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:11.835 { 00:16:11.835 "results": [ 00:16:11.835 { 00:16:11.835 "job": "raid_bdev1", 00:16:11.835 "core_mask": "0x1", 00:16:11.835 "workload": "randrw", 00:16:11.835 "percentage": 50, 00:16:11.835 "status": "finished", 00:16:11.835 "queue_depth": 1, 00:16:11.835 "io_size": 131072, 00:16:11.835 "runtime": 1.404357, 00:16:11.835 "iops": 6821.627264292484, 00:16:11.835 "mibps": 852.7034080365605, 00:16:11.835 "io_failed": 0, 00:16:11.835 "io_timeout": 0, 00:16:11.835 "avg_latency_us": 142.5105940406149, 00:16:11.835 "min_latency_us": 39.79636363636364, 00:16:11.835 "max_latency_us": 2025.658181818182 00:16:11.835 } 00:16:11.835 ], 00:16:11.835 "core_count": 1 00:16:11.835 } 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75368 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75368 ']' 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75368 00:16:11.835 14:15:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75368 00:16:11.836 killing process with pid 75368 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75368' 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75368 00:16:11.836 [2024-11-27 14:15:42.136700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.836 14:15:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75368 00:16:12.094 [2024-11-27 14:15:42.411811] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CAA6VqrQ4H 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:13.028 00:16:13.028 real 0m4.832s 00:16:13.028 user 0m5.867s 00:16:13.028 sys 0m0.680s 
00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.028 14:15:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.028 ************************************ 00:16:13.028 END TEST raid_read_error_test 00:16:13.028 ************************************ 00:16:13.289 14:15:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:13.289 14:15:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:13.289 14:15:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.289 14:15:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.289 ************************************ 00:16:13.289 START TEST raid_write_error_test 00:16:13.289 ************************************ 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CqMD6MOKvp 00:16:13.289 14:15:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75518 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75518 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75518 ']' 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.289 14:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.289 [2024-11-27 14:15:43.674294] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:16:13.289 [2024-11-27 14:15:43.674439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75518 ] 00:16:13.555 [2024-11-27 14:15:43.849896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.555 [2024-11-27 14:15:43.997803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.813 [2024-11-27 14:15:44.211788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.813 [2024-11-27 14:15:44.211896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 BaseBdev1_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 true 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 [2024-11-27 14:15:44.719929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:14.380 [2024-11-27 14:15:44.720055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.380 [2024-11-27 14:15:44.720084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:14.380 [2024-11-27 14:15:44.720102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.380 [2024-11-27 14:15:44.723124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.380 [2024-11-27 14:15:44.723205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.380 BaseBdev1 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 BaseBdev2_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:14.380 14:15:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 true 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 [2024-11-27 14:15:44.781581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:14.380 [2024-11-27 14:15:44.781669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.380 [2024-11-27 14:15:44.781694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:14.380 [2024-11-27 14:15:44.781711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.380 [2024-11-27 14:15:44.784609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.380 [2024-11-27 14:15:44.784675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:14.380 BaseBdev2 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:14.380 BaseBdev3_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.380 true 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.380 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.381 [2024-11-27 14:15:44.855764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:14.381 [2024-11-27 14:15:44.855872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.381 [2024-11-27 14:15:44.855901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:14.381 [2024-11-27 14:15:44.855920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.381 [2024-11-27 14:15:44.858975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.381 [2024-11-27 14:15:44.859062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:14.381 BaseBdev3 00:16:14.381 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.381 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.381 14:15:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:14.381 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.381 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.640 BaseBdev4_malloc 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.640 true 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.640 [2024-11-27 14:15:44.922921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:14.640 [2024-11-27 14:15:44.923022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.640 [2024-11-27 14:15:44.923049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:14.640 [2024-11-27 14:15:44.923067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.640 [2024-11-27 14:15:44.926157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.640 [2024-11-27 14:15:44.926230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:14.640 BaseBdev4 
00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.640 [2024-11-27 14:15:44.931117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.640 [2024-11-27 14:15:44.933780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.640 [2024-11-27 14:15:44.933931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.640 [2024-11-27 14:15:44.934036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.640 [2024-11-27 14:15:44.934381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:14.640 [2024-11-27 14:15:44.934418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.640 [2024-11-27 14:15:44.934789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:14.640 [2024-11-27 14:15:44.935064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:14.640 [2024-11-27 14:15:44.935091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:14.640 [2024-11-27 14:15:44.935326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.640 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.641 "name": "raid_bdev1", 00:16:14.641 "uuid": "97aee86d-19f1-4e8f-bb15-4b3bbefc6923", 00:16:14.641 "strip_size_kb": 0, 00:16:14.641 "state": "online", 00:16:14.641 "raid_level": "raid1", 00:16:14.641 "superblock": true, 00:16:14.641 "num_base_bdevs": 4, 00:16:14.641 "num_base_bdevs_discovered": 4, 00:16:14.641 
"num_base_bdevs_operational": 4, 00:16:14.641 "base_bdevs_list": [ 00:16:14.641 { 00:16:14.641 "name": "BaseBdev1", 00:16:14.641 "uuid": "556ebe16-d391-5403-821b-0d48f2809ceb", 00:16:14.641 "is_configured": true, 00:16:14.641 "data_offset": 2048, 00:16:14.641 "data_size": 63488 00:16:14.641 }, 00:16:14.641 { 00:16:14.641 "name": "BaseBdev2", 00:16:14.641 "uuid": "80d828dd-3e44-56fb-b993-5b6fe159eac2", 00:16:14.641 "is_configured": true, 00:16:14.641 "data_offset": 2048, 00:16:14.641 "data_size": 63488 00:16:14.641 }, 00:16:14.641 { 00:16:14.641 "name": "BaseBdev3", 00:16:14.641 "uuid": "da9408a4-ed99-5149-be1f-1bd61b95fb7c", 00:16:14.641 "is_configured": true, 00:16:14.641 "data_offset": 2048, 00:16:14.641 "data_size": 63488 00:16:14.641 }, 00:16:14.641 { 00:16:14.641 "name": "BaseBdev4", 00:16:14.641 "uuid": "2e9de357-0dd1-52c5-ad14-b881ce5f7889", 00:16:14.641 "is_configured": true, 00:16:14.641 "data_offset": 2048, 00:16:14.641 "data_size": 63488 00:16:14.641 } 00:16:14.641 ] 00:16:14.641 }' 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.641 14:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.209 14:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:15.210 14:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:15.210 [2024-11-27 14:15:45.564933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.146 [2024-11-27 14:15:46.444418] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:16.146 [2024-11-27 14:15:46.444543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.146 [2024-11-27 14:15:46.444885] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.146 "name": "raid_bdev1", 00:16:16.146 "uuid": "97aee86d-19f1-4e8f-bb15-4b3bbefc6923", 00:16:16.146 "strip_size_kb": 0, 00:16:16.146 "state": "online", 00:16:16.146 "raid_level": "raid1", 00:16:16.146 "superblock": true, 00:16:16.146 "num_base_bdevs": 4, 00:16:16.146 "num_base_bdevs_discovered": 3, 00:16:16.146 "num_base_bdevs_operational": 3, 00:16:16.146 "base_bdevs_list": [ 00:16:16.146 { 00:16:16.146 "name": null, 00:16:16.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.146 "is_configured": false, 00:16:16.146 "data_offset": 0, 00:16:16.146 "data_size": 63488 00:16:16.146 }, 00:16:16.146 { 00:16:16.146 "name": "BaseBdev2", 00:16:16.146 "uuid": "80d828dd-3e44-56fb-b993-5b6fe159eac2", 00:16:16.146 "is_configured": true, 00:16:16.146 "data_offset": 2048, 00:16:16.146 "data_size": 63488 00:16:16.146 }, 00:16:16.146 { 00:16:16.146 "name": "BaseBdev3", 00:16:16.146 "uuid": "da9408a4-ed99-5149-be1f-1bd61b95fb7c", 00:16:16.146 "is_configured": true, 00:16:16.146 "data_offset": 2048, 00:16:16.146 "data_size": 63488 00:16:16.146 }, 00:16:16.146 { 00:16:16.146 "name": "BaseBdev4", 00:16:16.146 "uuid": "2e9de357-0dd1-52c5-ad14-b881ce5f7889", 00:16:16.146 "is_configured": true, 00:16:16.146 "data_offset": 2048, 00:16:16.146 "data_size": 63488 00:16:16.146 } 00:16:16.146 ] 
00:16:16.146 }' 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.146 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.713 14:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.713 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.713 14:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.713 [2024-11-27 14:15:47.001422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.713 [2024-11-27 14:15:47.001488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.713 [2024-11-27 14:15:47.004729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.713 [2024-11-27 14:15:47.004824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.713 [2024-11-27 14:15:47.004982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.713 [2024-11-27 14:15:47.005003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:16.713 { 00:16:16.713 "results": [ 00:16:16.713 { 00:16:16.713 "job": "raid_bdev1", 00:16:16.713 "core_mask": "0x1", 00:16:16.713 "workload": "randrw", 00:16:16.713 "percentage": 50, 00:16:16.713 "status": "finished", 00:16:16.713 "queue_depth": 1, 00:16:16.713 "io_size": 131072, 00:16:16.713 "runtime": 1.434098, 00:16:16.713 "iops": 7668.2346673658285, 00:16:16.713 "mibps": 958.5293334207286, 00:16:16.713 "io_failed": 0, 00:16:16.713 "io_timeout": 0, 00:16:16.713 "avg_latency_us": 126.36877164846611, 00:16:16.713 "min_latency_us": 39.33090909090909, 00:16:16.713 "max_latency_us": 1601.1636363636364 00:16:16.713 } 00:16:16.713 ], 00:16:16.713 "core_count": 1 
00:16:16.713 } 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75518 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75518 ']' 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75518 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75518 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.713 killing process with pid 75518 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75518' 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75518 00:16:16.713 [2024-11-27 14:15:47.039036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.713 14:15:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75518 00:16:16.971 [2024-11-27 14:15:47.333122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CqMD6MOKvp 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:18.348 00:16:18.348 real 0m4.926s 00:16:18.348 user 0m5.958s 00:16:18.348 sys 0m0.685s 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.348 14:15:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.348 ************************************ 00:16:18.348 END TEST raid_write_error_test 00:16:18.348 ************************************ 00:16:18.348 14:15:48 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:16:18.348 14:15:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:18.348 14:15:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:18.348 14:15:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:18.348 14:15:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.348 14:15:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.348 ************************************ 00:16:18.348 START TEST raid_rebuild_test 00:16:18.348 ************************************ 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:18.348 
14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75663 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75663 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75663 ']' 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.348 14:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.348 [2024-11-27 14:15:48.671963] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:16:18.348 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:18.348 Zero copy mechanism will not be used. 
00:16:18.348 [2024-11-27 14:15:48.672158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75663 ] 00:16:18.348 [2024-11-27 14:15:48.855811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.606 [2024-11-27 14:15:48.993468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.870 [2024-11-27 14:15:49.213654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.870 [2024-11-27 14:15:49.213724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 BaseBdev1_malloc 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 [2024-11-27 14:15:49.718968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:19.439 
[2024-11-27 14:15:49.719097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.439 [2024-11-27 14:15:49.719128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:19.439 [2024-11-27 14:15:49.719148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.439 [2024-11-27 14:15:49.722074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.439 [2024-11-27 14:15:49.722127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:19.439 BaseBdev1 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 BaseBdev2_malloc 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 [2024-11-27 14:15:49.777371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:19.439 [2024-11-27 14:15:49.777474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.439 [2024-11-27 14:15:49.777510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:16:19.439 [2024-11-27 14:15:49.777528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.439 [2024-11-27 14:15:49.780443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.439 [2024-11-27 14:15:49.780508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:19.439 BaseBdev2 00:16:19.439 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.440 spare_malloc 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.440 spare_delay 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.440 [2024-11-27 14:15:49.842478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.440 [2024-11-27 14:15:49.842596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:19.440 [2024-11-27 14:15:49.842641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:19.440 [2024-11-27 14:15:49.842660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.440 [2024-11-27 14:15:49.845524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.440 [2024-11-27 14:15:49.845588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.440 spare 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.440 [2024-11-27 14:15:49.850623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.440 [2024-11-27 14:15:49.853164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.440 [2024-11-27 14:15:49.853303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:19.440 [2024-11-27 14:15:49.853326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:19.440 [2024-11-27 14:15:49.853693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:19.440 [2024-11-27 14:15:49.853949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:19.440 [2024-11-27 14:15:49.853983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:19.440 [2024-11-27 14:15:49.854247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.440 "name": "raid_bdev1", 00:16:19.440 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:19.440 "strip_size_kb": 0, 00:16:19.440 "state": "online", 00:16:19.440 
"raid_level": "raid1", 00:16:19.440 "superblock": false, 00:16:19.440 "num_base_bdevs": 2, 00:16:19.440 "num_base_bdevs_discovered": 2, 00:16:19.440 "num_base_bdevs_operational": 2, 00:16:19.440 "base_bdevs_list": [ 00:16:19.440 { 00:16:19.440 "name": "BaseBdev1", 00:16:19.440 "uuid": "54faa348-ecac-55ac-89f1-0e505f1f8573", 00:16:19.440 "is_configured": true, 00:16:19.440 "data_offset": 0, 00:16:19.440 "data_size": 65536 00:16:19.440 }, 00:16:19.440 { 00:16:19.440 "name": "BaseBdev2", 00:16:19.440 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:19.440 "is_configured": true, 00:16:19.440 "data_offset": 0, 00:16:19.440 "data_size": 65536 00:16:19.440 } 00:16:19.440 ] 00:16:19.440 }' 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.440 14:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.007 [2024-11-27 14:15:50.367249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.007 14:15:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.007 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:20.266 [2024-11-27 14:15:50.695133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:20.266 /dev/nbd0 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.266 1+0 records in 00:16:20.266 1+0 records out 00:16:20.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330685 s, 12.4 MB/s 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:20.266 14:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:28.375 65536+0 records in 00:16:28.375 65536+0 records out 00:16:28.375 33554432 bytes (34 MB, 32 MiB) copied, 6.74932 s, 5.0 MB/s 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:28.375 [2024-11-27 14:15:57.789809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.375 [2024-11-27 14:15:57.821891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.375 "name": "raid_bdev1", 00:16:28.375 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:28.375 "strip_size_kb": 0, 00:16:28.375 "state": "online", 00:16:28.375 "raid_level": "raid1", 00:16:28.375 "superblock": false, 00:16:28.375 "num_base_bdevs": 2, 00:16:28.375 "num_base_bdevs_discovered": 1, 00:16:28.375 "num_base_bdevs_operational": 1, 00:16:28.375 "base_bdevs_list": [ 00:16:28.375 { 00:16:28.375 "name": null, 00:16:28.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.375 "is_configured": false, 00:16:28.375 "data_offset": 0, 00:16:28.375 "data_size": 65536 00:16:28.375 }, 00:16:28.375 { 00:16:28.375 "name": "BaseBdev2", 00:16:28.375 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:28.375 "is_configured": true, 00:16:28.375 "data_offset": 0, 00:16:28.375 "data_size": 65536 00:16:28.375 } 00:16:28.375 ] 00:16:28.375 }' 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.375 14:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.375 14:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.375 14:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.375 14:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.376 [2024-11-27 14:15:58.318114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.376 [2024-11-27 14:15:58.335006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:16:28.376 14:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.376 14:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:28.376 [2024-11-27 14:15:58.337674] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.942 "name": "raid_bdev1", 00:16:28.942 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:28.942 "strip_size_kb": 0, 00:16:28.942 "state": "online", 00:16:28.942 "raid_level": "raid1", 00:16:28.942 "superblock": false, 00:16:28.942 "num_base_bdevs": 2, 00:16:28.942 "num_base_bdevs_discovered": 2, 00:16:28.942 "num_base_bdevs_operational": 2, 00:16:28.942 "process": { 00:16:28.942 "type": "rebuild", 00:16:28.942 "target": "spare", 00:16:28.942 "progress": { 00:16:28.942 
"blocks": 20480, 00:16:28.942 "percent": 31 00:16:28.942 } 00:16:28.942 }, 00:16:28.942 "base_bdevs_list": [ 00:16:28.942 { 00:16:28.942 "name": "spare", 00:16:28.942 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:28.942 "is_configured": true, 00:16:28.942 "data_offset": 0, 00:16:28.942 "data_size": 65536 00:16:28.942 }, 00:16:28.942 { 00:16:28.942 "name": "BaseBdev2", 00:16:28.942 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:28.942 "is_configured": true, 00:16:28.942 "data_offset": 0, 00:16:28.942 "data_size": 65536 00:16:28.942 } 00:16:28.942 ] 00:16:28.942 }' 00:16:28.942 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.201 [2024-11-27 14:15:59.516126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.201 [2024-11-27 14:15:59.548121] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.201 [2024-11-27 14:15:59.548216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.201 [2024-11-27 14:15:59.548243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.201 [2024-11-27 14:15:59.548260] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.201 14:15:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.201 "name": "raid_bdev1", 00:16:29.201 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:29.201 "strip_size_kb": 0, 00:16:29.201 "state": "online", 00:16:29.201 "raid_level": "raid1", 00:16:29.201 
"superblock": false, 00:16:29.201 "num_base_bdevs": 2, 00:16:29.201 "num_base_bdevs_discovered": 1, 00:16:29.201 "num_base_bdevs_operational": 1, 00:16:29.201 "base_bdevs_list": [ 00:16:29.201 { 00:16:29.201 "name": null, 00:16:29.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.201 "is_configured": false, 00:16:29.201 "data_offset": 0, 00:16:29.201 "data_size": 65536 00:16:29.201 }, 00:16:29.201 { 00:16:29.201 "name": "BaseBdev2", 00:16:29.201 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:29.201 "is_configured": true, 00:16:29.201 "data_offset": 0, 00:16:29.201 "data_size": 65536 00:16:29.201 } 00:16:29.201 ] 00:16:29.201 }' 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.201 14:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.769 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.769 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.769 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.769 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.769 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:29.770 "name": "raid_bdev1", 00:16:29.770 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:29.770 "strip_size_kb": 0, 00:16:29.770 "state": "online", 00:16:29.770 "raid_level": "raid1", 00:16:29.770 "superblock": false, 00:16:29.770 "num_base_bdevs": 2, 00:16:29.770 "num_base_bdevs_discovered": 1, 00:16:29.770 "num_base_bdevs_operational": 1, 00:16:29.770 "base_bdevs_list": [ 00:16:29.770 { 00:16:29.770 "name": null, 00:16:29.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.770 "is_configured": false, 00:16:29.770 "data_offset": 0, 00:16:29.770 "data_size": 65536 00:16:29.770 }, 00:16:29.770 { 00:16:29.770 "name": "BaseBdev2", 00:16:29.770 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:29.770 "is_configured": true, 00:16:29.770 "data_offset": 0, 00:16:29.770 "data_size": 65536 00:16:29.770 } 00:16:29.770 ] 00:16:29.770 }' 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 [2024-11-27 14:16:00.253396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.770 [2024-11-27 14:16:00.269095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:16:29.770 14:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 
14:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:29.770 [2024-11-27 14:16:00.271995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.146 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.146 "name": "raid_bdev1", 00:16:31.146 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:31.146 "strip_size_kb": 0, 00:16:31.146 "state": "online", 00:16:31.146 "raid_level": "raid1", 00:16:31.146 "superblock": false, 00:16:31.146 "num_base_bdevs": 2, 00:16:31.147 "num_base_bdevs_discovered": 2, 00:16:31.147 "num_base_bdevs_operational": 2, 00:16:31.147 "process": { 00:16:31.147 "type": "rebuild", 00:16:31.147 "target": "spare", 00:16:31.147 "progress": { 00:16:31.147 "blocks": 20480, 00:16:31.147 "percent": 31 00:16:31.147 } 00:16:31.147 }, 00:16:31.147 "base_bdevs_list": [ 
00:16:31.147 { 00:16:31.147 "name": "spare", 00:16:31.147 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:31.147 "is_configured": true, 00:16:31.147 "data_offset": 0, 00:16:31.147 "data_size": 65536 00:16:31.147 }, 00:16:31.147 { 00:16:31.147 "name": "BaseBdev2", 00:16:31.147 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:31.147 "is_configured": true, 00:16:31.147 "data_offset": 0, 00:16:31.147 "data_size": 65536 00:16:31.147 } 00:16:31.147 ] 00:16:31.147 }' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.147 
14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.147 "name": "raid_bdev1", 00:16:31.147 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:31.147 "strip_size_kb": 0, 00:16:31.147 "state": "online", 00:16:31.147 "raid_level": "raid1", 00:16:31.147 "superblock": false, 00:16:31.147 "num_base_bdevs": 2, 00:16:31.147 "num_base_bdevs_discovered": 2, 00:16:31.147 "num_base_bdevs_operational": 2, 00:16:31.147 "process": { 00:16:31.147 "type": "rebuild", 00:16:31.147 "target": "spare", 00:16:31.147 "progress": { 00:16:31.147 "blocks": 22528, 00:16:31.147 "percent": 34 00:16:31.147 } 00:16:31.147 }, 00:16:31.147 "base_bdevs_list": [ 00:16:31.147 { 00:16:31.147 "name": "spare", 00:16:31.147 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:31.147 "is_configured": true, 00:16:31.147 "data_offset": 0, 00:16:31.147 "data_size": 65536 00:16:31.147 }, 00:16:31.147 { 00:16:31.147 "name": "BaseBdev2", 00:16:31.147 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:31.147 "is_configured": true, 00:16:31.147 "data_offset": 0, 00:16:31.147 "data_size": 65536 00:16:31.147 } 00:16:31.147 ] 00:16:31.147 }' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.147 14:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.524 "name": "raid_bdev1", 00:16:32.524 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:32.524 "strip_size_kb": 0, 00:16:32.524 "state": "online", 00:16:32.524 "raid_level": "raid1", 00:16:32.524 "superblock": false, 00:16:32.524 "num_base_bdevs": 2, 00:16:32.524 "num_base_bdevs_discovered": 2, 00:16:32.524 "num_base_bdevs_operational": 2, 00:16:32.524 "process": { 
00:16:32.524 "type": "rebuild", 00:16:32.524 "target": "spare", 00:16:32.524 "progress": { 00:16:32.524 "blocks": 47104, 00:16:32.524 "percent": 71 00:16:32.524 } 00:16:32.524 }, 00:16:32.524 "base_bdevs_list": [ 00:16:32.524 { 00:16:32.524 "name": "spare", 00:16:32.524 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:32.524 "is_configured": true, 00:16:32.524 "data_offset": 0, 00:16:32.524 "data_size": 65536 00:16:32.524 }, 00:16:32.524 { 00:16:32.524 "name": "BaseBdev2", 00:16:32.524 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:32.524 "is_configured": true, 00:16:32.524 "data_offset": 0, 00:16:32.524 "data_size": 65536 00:16:32.524 } 00:16:32.524 ] 00:16:32.524 }' 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.524 14:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.090 [2024-11-27 14:16:03.502640] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:33.090 [2024-11-27 14:16:03.502771] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:33.090 [2024-11-27 14:16:03.502868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.349 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.349 "name": "raid_bdev1", 00:16:33.349 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:33.349 "strip_size_kb": 0, 00:16:33.349 "state": "online", 00:16:33.349 "raid_level": "raid1", 00:16:33.349 "superblock": false, 00:16:33.349 "num_base_bdevs": 2, 00:16:33.349 "num_base_bdevs_discovered": 2, 00:16:33.349 "num_base_bdevs_operational": 2, 00:16:33.350 "base_bdevs_list": [ 00:16:33.350 { 00:16:33.350 "name": "spare", 00:16:33.350 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:33.350 "is_configured": true, 00:16:33.350 "data_offset": 0, 00:16:33.350 "data_size": 65536 00:16:33.350 }, 00:16:33.350 { 00:16:33.350 "name": "BaseBdev2", 00:16:33.350 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:33.350 "is_configured": true, 00:16:33.350 "data_offset": 0, 00:16:33.350 "data_size": 65536 00:16:33.350 } 00:16:33.350 ] 00:16:33.350 }' 00:16:33.350 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:33.608 14:16:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.608 14:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.608 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.608 "name": "raid_bdev1", 00:16:33.608 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:33.608 "strip_size_kb": 0, 00:16:33.608 "state": "online", 00:16:33.608 "raid_level": "raid1", 00:16:33.608 "superblock": false, 00:16:33.608 "num_base_bdevs": 2, 00:16:33.608 "num_base_bdevs_discovered": 2, 00:16:33.608 "num_base_bdevs_operational": 2, 00:16:33.608 "base_bdevs_list": [ 00:16:33.608 { 00:16:33.608 "name": "spare", 00:16:33.608 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:33.608 "is_configured": true, 
00:16:33.608 "data_offset": 0, 00:16:33.608 "data_size": 65536 00:16:33.608 }, 00:16:33.608 { 00:16:33.608 "name": "BaseBdev2", 00:16:33.608 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:33.608 "is_configured": true, 00:16:33.608 "data_offset": 0, 00:16:33.608 "data_size": 65536 00:16:33.608 } 00:16:33.608 ] 00:16:33.608 }' 00:16:33.608 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.608 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.608 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.867 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.867 "name": "raid_bdev1", 00:16:33.867 "uuid": "18894316-5996-4ab2-a95a-9e6cc2a442ea", 00:16:33.867 "strip_size_kb": 0, 00:16:33.867 "state": "online", 00:16:33.867 "raid_level": "raid1", 00:16:33.867 "superblock": false, 00:16:33.867 "num_base_bdevs": 2, 00:16:33.867 "num_base_bdevs_discovered": 2, 00:16:33.867 "num_base_bdevs_operational": 2, 00:16:33.867 "base_bdevs_list": [ 00:16:33.867 { 00:16:33.867 "name": "spare", 00:16:33.867 "uuid": "f564232f-7ed8-5e84-a0ed-f042fa3ff651", 00:16:33.867 "is_configured": true, 00:16:33.867 "data_offset": 0, 00:16:33.867 "data_size": 65536 00:16:33.867 }, 00:16:33.867 { 00:16:33.867 "name": "BaseBdev2", 00:16:33.867 "uuid": "fbec2a96-159a-5583-b5c3-ba1ded25ab68", 00:16:33.867 "is_configured": true, 00:16:33.867 "data_offset": 0, 00:16:33.867 "data_size": 65536 00:16:33.867 } 00:16:33.867 ] 00:16:33.867 }' 00:16:33.868 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.868 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.434 [2024-11-27 14:16:04.649328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.434 [2024-11-27 14:16:04.649397] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.434 [2024-11-27 14:16:04.649519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.434 [2024-11-27 14:16:04.649619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.434 [2024-11-27 14:16:04.649637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.434 14:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:34.693 /dev/nbd0 00:16:34.693 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.693 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.693 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.694 1+0 records in 00:16:34.694 1+0 records out 00:16:34.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305905 s, 13.4 MB/s 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.694 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:34.953 /dev/nbd1 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.953 1+0 records in 00:16:34.953 1+0 records out 00:16:34.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392061 s, 10.4 MB/s 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.953 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.212 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.470 14:16:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75663 00:16:35.728 14:16:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75663 ']' 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75663 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75663 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75663' 00:16:35.728 killing process with pid 75663 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75663 00:16:35.728 Received shutdown signal, test time was about 60.000000 seconds 00:16:35.728 00:16:35.728 Latency(us) 00:16:35.728 [2024-11-27T14:16:06.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.728 [2024-11-27T14:16:06.241Z] =================================================================================================================== 00:16:35.728 [2024-11-27T14:16:06.241Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:35.728 [2024-11-27 14:16:06.085728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.728 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75663 00:16:35.986 [2024-11-27 14:16:06.360062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:37.362 00:16:37.362 real 0m18.887s 00:16:37.362 user 0m20.811s 00:16:37.362 sys 0m3.711s 00:16:37.362 14:16:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.362 ************************************ 00:16:37.362 END TEST raid_rebuild_test 00:16:37.362 ************************************ 00:16:37.362 14:16:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:16:37.362 14:16:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:37.362 14:16:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.362 14:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.362 ************************************ 00:16:37.362 START TEST raid_rebuild_test_sb 00:16:37.362 ************************************ 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.362 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76120 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76120 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76120 ']' 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.363 14:16:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.363 [2024-11-27 14:16:07.596785] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:16:37.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:37.363 Zero copy mechanism will not be used. 00:16:37.363 [2024-11-27 14:16:07.597006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76120 ] 00:16:37.363 [2024-11-27 14:16:07.772736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.621 [2024-11-27 14:16:07.922216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.880 [2024-11-27 14:16:08.136498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.880 [2024-11-27 14:16:08.136588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.138 BaseBdev1_malloc 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.138 [2024-11-27 14:16:08.576376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:38.138 [2024-11-27 14:16:08.576479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.138 [2024-11-27 14:16:08.576514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.138 [2024-11-27 14:16:08.576535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.138 [2024-11-27 14:16:08.579587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.138 [2024-11-27 14:16:08.579670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.138 BaseBdev1 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.138 BaseBdev2_malloc 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.138 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:38.139 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.139 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.139 [2024-11-27 14:16:08.634530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:38.139 [2024-11-27 14:16:08.634636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.139 [2024-11-27 14:16:08.634672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:38.139 [2024-11-27 14:16:08.634693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.139 [2024-11-27 14:16:08.637839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.139 [2024-11-27 14:16:08.637924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:38.139 BaseBdev2 00:16:38.139 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.139 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:38.139 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.139 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 spare_malloc 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 spare_delay 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 [2024-11-27 14:16:08.713797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.397 [2024-11-27 14:16:08.713923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.397 [2024-11-27 14:16:08.713956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:38.397 [2024-11-27 14:16:08.713987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.397 [2024-11-27 14:16:08.717124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.397 [2024-11-27 14:16:08.717174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.397 spare 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.397 
14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 [2024-11-27 14:16:08.721869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.397 [2024-11-27 14:16:08.724636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.397 [2024-11-27 14:16:08.724895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:38.397 [2024-11-27 14:16:08.724930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.397 [2024-11-27 14:16:08.725270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:38.397 [2024-11-27 14:16:08.725516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:38.397 [2024-11-27 14:16:08.725540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:38.397 [2024-11-27 14:16:08.725811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.397 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.397 "name": "raid_bdev1", 00:16:38.397 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:38.397 "strip_size_kb": 0, 00:16:38.397 "state": "online", 00:16:38.397 "raid_level": "raid1", 00:16:38.397 "superblock": true, 00:16:38.397 "num_base_bdevs": 2, 00:16:38.397 "num_base_bdevs_discovered": 2, 00:16:38.397 "num_base_bdevs_operational": 2, 00:16:38.397 "base_bdevs_list": [ 00:16:38.398 { 00:16:38.398 "name": "BaseBdev1", 00:16:38.398 "uuid": "edefdddc-a1f1-5de1-81e3-8dd7d45fec0a", 00:16:38.398 "is_configured": true, 00:16:38.398 "data_offset": 2048, 00:16:38.398 "data_size": 63488 00:16:38.398 }, 00:16:38.398 { 00:16:38.398 "name": "BaseBdev2", 00:16:38.398 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:38.398 "is_configured": true, 00:16:38.398 "data_offset": 2048, 00:16:38.398 "data_size": 63488 00:16:38.398 } 00:16:38.398 ] 00:16:38.398 }' 00:16:38.398 14:16:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.398 14:16:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:38.963 [2024-11-27 14:16:09.206526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:38.963 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:39.221 [2024-11-27 14:16:09.594311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:39.221 /dev/nbd0 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.221 1+0 records in 00:16:39.221 1+0 records out 00:16:39.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322694 s, 12.7 MB/s 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:39.221 14:16:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:45.828 63488+0 records in 00:16:45.828 63488+0 records out 00:16:45.828 32505856 bytes (33 MB, 31 MiB) copied, 6.54013 s, 5.0 MB/s 00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.828 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.098 [2024-11-27 14:16:16.488855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.098 [2024-11-27 14:16:16.501385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.098 "name": "raid_bdev1", 00:16:46.098 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:46.098 "strip_size_kb": 0, 00:16:46.098 "state": "online", 00:16:46.098 "raid_level": "raid1", 
00:16:46.098 "superblock": true, 00:16:46.098 "num_base_bdevs": 2, 00:16:46.098 "num_base_bdevs_discovered": 1, 00:16:46.098 "num_base_bdevs_operational": 1, 00:16:46.098 "base_bdevs_list": [ 00:16:46.098 { 00:16:46.098 "name": null, 00:16:46.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.098 "is_configured": false, 00:16:46.098 "data_offset": 0, 00:16:46.098 "data_size": 63488 00:16:46.098 }, 00:16:46.098 { 00:16:46.098 "name": "BaseBdev2", 00:16:46.098 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:46.098 "is_configured": true, 00:16:46.098 "data_offset": 2048, 00:16:46.098 "data_size": 63488 00:16:46.098 } 00:16:46.098 ] 00:16:46.098 }' 00:16:46.098 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.099 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.666 14:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.666 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.666 14:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.666 [2024-11-27 14:16:16.993637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.666 [2024-11-27 14:16:17.010587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:16:46.666 14:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.666 14:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:46.666 [2024-11-27 14:16:17.013149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.601 "name": "raid_bdev1", 00:16:47.601 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:47.601 "strip_size_kb": 0, 00:16:47.601 "state": "online", 00:16:47.601 "raid_level": "raid1", 00:16:47.601 "superblock": true, 00:16:47.601 "num_base_bdevs": 2, 00:16:47.601 "num_base_bdevs_discovered": 2, 00:16:47.601 "num_base_bdevs_operational": 2, 00:16:47.601 "process": { 00:16:47.601 "type": "rebuild", 00:16:47.601 "target": "spare", 00:16:47.601 "progress": { 00:16:47.601 "blocks": 20480, 00:16:47.601 "percent": 32 00:16:47.601 } 00:16:47.601 }, 00:16:47.601 "base_bdevs_list": [ 00:16:47.601 { 00:16:47.601 "name": "spare", 00:16:47.601 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:47.601 "is_configured": true, 00:16:47.601 "data_offset": 2048, 00:16:47.601 "data_size": 63488 00:16:47.601 }, 00:16:47.601 { 00:16:47.601 "name": "BaseBdev2", 00:16:47.601 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:47.601 "is_configured": true, 00:16:47.601 "data_offset": 2048, 
00:16:47.601 "data_size": 63488 00:16:47.601 } 00:16:47.601 ] 00:16:47.601 }' 00:16:47.601 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.860 [2024-11-27 14:16:18.183485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.860 [2024-11-27 14:16:18.225214] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.860 [2024-11-27 14:16:18.225326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.860 [2024-11-27 14:16:18.225350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.860 [2024-11-27 14:16:18.225365] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.860 14:16:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.860 "name": "raid_bdev1", 00:16:47.860 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:47.860 "strip_size_kb": 0, 00:16:47.860 "state": "online", 00:16:47.860 "raid_level": "raid1", 00:16:47.860 "superblock": true, 00:16:47.860 "num_base_bdevs": 2, 00:16:47.860 "num_base_bdevs_discovered": 1, 00:16:47.860 "num_base_bdevs_operational": 1, 00:16:47.860 "base_bdevs_list": [ 00:16:47.860 { 00:16:47.860 "name": null, 00:16:47.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.860 "is_configured": false, 00:16:47.860 "data_offset": 0, 00:16:47.860 "data_size": 63488 00:16:47.860 }, 00:16:47.860 { 
00:16:47.860 "name": "BaseBdev2", 00:16:47.860 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:47.860 "is_configured": true, 00:16:47.860 "data_offset": 2048, 00:16:47.860 "data_size": 63488 00:16:47.860 } 00:16:47.860 ] 00:16:47.860 }' 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.860 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.427 "name": "raid_bdev1", 00:16:48.427 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:48.427 "strip_size_kb": 0, 00:16:48.427 "state": "online", 00:16:48.427 "raid_level": "raid1", 00:16:48.427 "superblock": true, 00:16:48.427 "num_base_bdevs": 2, 00:16:48.427 "num_base_bdevs_discovered": 1, 00:16:48.427 "num_base_bdevs_operational": 1, 
00:16:48.427 "base_bdevs_list": [ 00:16:48.427 { 00:16:48.427 "name": null, 00:16:48.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.427 "is_configured": false, 00:16:48.427 "data_offset": 0, 00:16:48.427 "data_size": 63488 00:16:48.427 }, 00:16:48.427 { 00:16:48.427 "name": "BaseBdev2", 00:16:48.427 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:48.427 "is_configured": true, 00:16:48.427 "data_offset": 2048, 00:16:48.427 "data_size": 63488 00:16:48.427 } 00:16:48.427 ] 00:16:48.427 }' 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.427 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.427 [2024-11-27 14:16:18.936629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.686 [2024-11-27 14:16:18.954027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:16:48.686 14:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.686 14:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.686 [2024-11-27 14:16:18.957005] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.621 14:16:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.621 "name": "raid_bdev1", 00:16:49.621 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:49.621 "strip_size_kb": 0, 00:16:49.621 "state": "online", 00:16:49.621 "raid_level": "raid1", 00:16:49.621 "superblock": true, 00:16:49.621 "num_base_bdevs": 2, 00:16:49.621 "num_base_bdevs_discovered": 2, 00:16:49.621 "num_base_bdevs_operational": 2, 00:16:49.621 "process": { 00:16:49.621 "type": "rebuild", 00:16:49.621 "target": "spare", 00:16:49.621 "progress": { 00:16:49.621 "blocks": 20480, 00:16:49.621 "percent": 32 00:16:49.621 } 00:16:49.621 }, 00:16:49.621 "base_bdevs_list": [ 00:16:49.621 { 00:16:49.621 "name": "spare", 00:16:49.621 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:49.621 "is_configured": true, 00:16:49.621 "data_offset": 2048, 00:16:49.621 "data_size": 63488 00:16:49.621 }, 00:16:49.621 { 00:16:49.621 "name": "BaseBdev2", 00:16:49.621 "uuid": 
"c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:49.621 "is_configured": true, 00:16:49.621 "data_offset": 2048, 00:16:49.621 "data_size": 63488 00:16:49.621 } 00:16:49.621 ] 00:16:49.621 }' 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:49.621 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=422 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.621 14:16:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.880 "name": "raid_bdev1", 00:16:49.880 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:49.880 "strip_size_kb": 0, 00:16:49.880 "state": "online", 00:16:49.880 "raid_level": "raid1", 00:16:49.880 "superblock": true, 00:16:49.880 "num_base_bdevs": 2, 00:16:49.880 "num_base_bdevs_discovered": 2, 00:16:49.880 "num_base_bdevs_operational": 2, 00:16:49.880 "process": { 00:16:49.880 "type": "rebuild", 00:16:49.880 "target": "spare", 00:16:49.880 "progress": { 00:16:49.880 "blocks": 22528, 00:16:49.880 "percent": 35 00:16:49.880 } 00:16:49.880 }, 00:16:49.880 "base_bdevs_list": [ 00:16:49.880 { 00:16:49.880 "name": "spare", 00:16:49.880 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:49.880 "is_configured": true, 00:16:49.880 "data_offset": 2048, 00:16:49.880 "data_size": 63488 00:16:49.880 }, 00:16:49.880 { 00:16:49.880 "name": "BaseBdev2", 00:16:49.880 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:49.880 "is_configured": true, 00:16:49.880 "data_offset": 2048, 00:16:49.880 "data_size": 63488 00:16:49.880 } 00:16:49.880 ] 00:16:49.880 }' 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.880 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.815 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.073 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.073 "name": "raid_bdev1", 00:16:51.073 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:51.073 "strip_size_kb": 0, 00:16:51.073 "state": "online", 00:16:51.073 "raid_level": "raid1", 00:16:51.073 "superblock": true, 00:16:51.073 "num_base_bdevs": 2, 00:16:51.073 "num_base_bdevs_discovered": 2, 00:16:51.073 
"num_base_bdevs_operational": 2, 00:16:51.073 "process": { 00:16:51.073 "type": "rebuild", 00:16:51.073 "target": "spare", 00:16:51.073 "progress": { 00:16:51.073 "blocks": 47104, 00:16:51.073 "percent": 74 00:16:51.073 } 00:16:51.073 }, 00:16:51.073 "base_bdevs_list": [ 00:16:51.073 { 00:16:51.073 "name": "spare", 00:16:51.073 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:51.073 "is_configured": true, 00:16:51.073 "data_offset": 2048, 00:16:51.073 "data_size": 63488 00:16:51.073 }, 00:16:51.073 { 00:16:51.073 "name": "BaseBdev2", 00:16:51.073 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:51.073 "is_configured": true, 00:16:51.073 "data_offset": 2048, 00:16:51.073 "data_size": 63488 00:16:51.073 } 00:16:51.073 ] 00:16:51.073 }' 00:16:51.073 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.073 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.073 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.073 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.073 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.703 [2024-11-27 14:16:22.086738] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:51.703 [2024-11-27 14:16:22.086866] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:51.703 [2024-11-27 14:16:22.087027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.270 "name": "raid_bdev1", 00:16:52.270 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:52.270 "strip_size_kb": 0, 00:16:52.270 "state": "online", 00:16:52.270 "raid_level": "raid1", 00:16:52.270 "superblock": true, 00:16:52.270 "num_base_bdevs": 2, 00:16:52.270 "num_base_bdevs_discovered": 2, 00:16:52.270 "num_base_bdevs_operational": 2, 00:16:52.270 "base_bdevs_list": [ 00:16:52.270 { 00:16:52.270 "name": "spare", 00:16:52.270 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:52.270 "is_configured": true, 00:16:52.270 "data_offset": 2048, 00:16:52.270 "data_size": 63488 00:16:52.270 }, 00:16:52.270 { 00:16:52.270 "name": "BaseBdev2", 00:16:52.270 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:52.270 "is_configured": true, 00:16:52.270 "data_offset": 2048, 00:16:52.270 "data_size": 63488 00:16:52.270 } 00:16:52.270 ] 00:16:52.270 }' 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.270 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.271 "name": "raid_bdev1", 00:16:52.271 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:52.271 "strip_size_kb": 0, 00:16:52.271 "state": "online", 00:16:52.271 "raid_level": "raid1", 00:16:52.271 "superblock": true, 00:16:52.271 "num_base_bdevs": 2, 00:16:52.271 "num_base_bdevs_discovered": 2, 00:16:52.271 "num_base_bdevs_operational": 2, 
00:16:52.271 "base_bdevs_list": [ 00:16:52.271 { 00:16:52.271 "name": "spare", 00:16:52.271 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:52.271 "is_configured": true, 00:16:52.271 "data_offset": 2048, 00:16:52.271 "data_size": 63488 00:16:52.271 }, 00:16:52.271 { 00:16:52.271 "name": "BaseBdev2", 00:16:52.271 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:52.271 "is_configured": true, 00:16:52.271 "data_offset": 2048, 00:16:52.271 "data_size": 63488 00:16:52.271 } 00:16:52.271 ] 00:16:52.271 }' 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.271 14:16:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.271 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.528 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.528 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.528 "name": "raid_bdev1", 00:16:52.528 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:52.528 "strip_size_kb": 0, 00:16:52.528 "state": "online", 00:16:52.528 "raid_level": "raid1", 00:16:52.528 "superblock": true, 00:16:52.528 "num_base_bdevs": 2, 00:16:52.528 "num_base_bdevs_discovered": 2, 00:16:52.528 "num_base_bdevs_operational": 2, 00:16:52.528 "base_bdevs_list": [ 00:16:52.528 { 00:16:52.528 "name": "spare", 00:16:52.528 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:52.528 "is_configured": true, 00:16:52.528 "data_offset": 2048, 00:16:52.528 "data_size": 63488 00:16:52.528 }, 00:16:52.528 { 00:16:52.528 "name": "BaseBdev2", 00:16:52.528 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:52.528 "is_configured": true, 00:16:52.528 "data_offset": 2048, 00:16:52.528 "data_size": 63488 00:16:52.528 } 00:16:52.528 ] 00:16:52.528 }' 00:16:52.528 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.528 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 [2024-11-27 14:16:23.307281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.095 [2024-11-27 14:16:23.307324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.095 [2024-11-27 14:16:23.307442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.095 [2024-11-27 14:16:23.307534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.095 [2024-11-27 14:16:23.307570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.095 
14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.095 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:53.354 /dev/nbd0 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.354 14:16:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.354 1+0 records in 00:16:53.354 1+0 records out 00:16:53.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261347 s, 15.7 MB/s 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.354 14:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:53.612 /dev/nbd1 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- 
# grep -q -w nbd1 /proc/partitions 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.612 1+0 records in 00:16:53.612 1+0 records out 00:16:53.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405628 s, 10.1 MB/s 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.612 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.870 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.129 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.387 [2024-11-27 14:16:24.832322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.387 [2024-11-27 14:16:24.832400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.387 [2024-11-27 14:16:24.832437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:54.387 [2024-11-27 14:16:24.832453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.387 [2024-11-27 14:16:24.835496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.387 [2024-11-27 14:16:24.835575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.387 [2024-11-27 14:16:24.835707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:16:54.387 [2024-11-27 14:16:24.835791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.387 [2024-11-27 14:16:24.836001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.387 spare 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.387 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.645 [2024-11-27 14:16:24.936143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:54.645 [2024-11-27 14:16:24.936190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.645 [2024-11-27 14:16:24.936546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:16:54.645 [2024-11-27 14:16:24.936804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:54.645 [2024-11-27 14:16:24.936851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:54.645 [2024-11-27 14:16:24.937078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.645 14:16:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.645 "name": "raid_bdev1", 00:16:54.645 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:54.645 "strip_size_kb": 0, 00:16:54.645 "state": "online", 00:16:54.645 "raid_level": "raid1", 00:16:54.645 "superblock": true, 00:16:54.645 "num_base_bdevs": 2, 00:16:54.645 "num_base_bdevs_discovered": 2, 00:16:54.645 "num_base_bdevs_operational": 2, 00:16:54.645 "base_bdevs_list": [ 00:16:54.645 { 00:16:54.645 "name": "spare", 00:16:54.645 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:54.645 "is_configured": true, 00:16:54.645 "data_offset": 2048, 00:16:54.645 "data_size": 63488 00:16:54.645 }, 00:16:54.645 { 
00:16:54.645 "name": "BaseBdev2", 00:16:54.645 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:54.645 "is_configured": true, 00:16:54.645 "data_offset": 2048, 00:16:54.645 "data_size": 63488 00:16:54.645 } 00:16:54.645 ] 00:16:54.645 }' 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.645 14:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.210 "name": "raid_bdev1", 00:16:55.210 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:55.210 "strip_size_kb": 0, 00:16:55.210 "state": "online", 00:16:55.210 "raid_level": "raid1", 00:16:55.210 "superblock": true, 00:16:55.210 "num_base_bdevs": 2, 00:16:55.210 "num_base_bdevs_discovered": 2, 00:16:55.210 "num_base_bdevs_operational": 2, 
00:16:55.210 "base_bdevs_list": [ 00:16:55.210 { 00:16:55.210 "name": "spare", 00:16:55.210 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:55.210 "is_configured": true, 00:16:55.210 "data_offset": 2048, 00:16:55.210 "data_size": 63488 00:16:55.210 }, 00:16:55.210 { 00:16:55.210 "name": "BaseBdev2", 00:16:55.210 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:55.210 "is_configured": true, 00:16:55.210 "data_offset": 2048, 00:16:55.210 "data_size": 63488 00:16:55.210 } 00:16:55.210 ] 00:16:55.210 }' 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.210 [2024-11-27 14:16:25.669295] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.210 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.468 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.468 "name": "raid_bdev1", 00:16:55.468 "uuid": 
"397013b5-c2f2-4711-9548-d34c46090028", 00:16:55.468 "strip_size_kb": 0, 00:16:55.468 "state": "online", 00:16:55.468 "raid_level": "raid1", 00:16:55.468 "superblock": true, 00:16:55.468 "num_base_bdevs": 2, 00:16:55.468 "num_base_bdevs_discovered": 1, 00:16:55.468 "num_base_bdevs_operational": 1, 00:16:55.468 "base_bdevs_list": [ 00:16:55.468 { 00:16:55.468 "name": null, 00:16:55.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.468 "is_configured": false, 00:16:55.468 "data_offset": 0, 00:16:55.468 "data_size": 63488 00:16:55.468 }, 00:16:55.468 { 00:16:55.468 "name": "BaseBdev2", 00:16:55.468 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:55.468 "is_configured": true, 00:16:55.468 "data_offset": 2048, 00:16:55.468 "data_size": 63488 00:16:55.468 } 00:16:55.468 ] 00:16:55.468 }' 00:16:55.468 14:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.468 14:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.726 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.726 14:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.726 14:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.726 [2024-11-27 14:16:26.185571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.726 [2024-11-27 14:16:26.185804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:55.726 [2024-11-27 14:16:26.185846] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:55.726 [2024-11-27 14:16:26.185925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.726 [2024-11-27 14:16:26.202378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:55.726 14:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.726 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:55.726 [2024-11-27 14:16:26.205107] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.104 "name": "raid_bdev1", 00:16:57.104 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:57.104 "strip_size_kb": 0, 00:16:57.104 "state": "online", 00:16:57.104 "raid_level": "raid1", 
00:16:57.104 "superblock": true, 00:16:57.104 "num_base_bdevs": 2, 00:16:57.104 "num_base_bdevs_discovered": 2, 00:16:57.104 "num_base_bdevs_operational": 2, 00:16:57.104 "process": { 00:16:57.104 "type": "rebuild", 00:16:57.104 "target": "spare", 00:16:57.104 "progress": { 00:16:57.104 "blocks": 20480, 00:16:57.104 "percent": 32 00:16:57.104 } 00:16:57.104 }, 00:16:57.104 "base_bdevs_list": [ 00:16:57.104 { 00:16:57.104 "name": "spare", 00:16:57.104 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:57.104 "is_configured": true, 00:16:57.104 "data_offset": 2048, 00:16:57.104 "data_size": 63488 00:16:57.104 }, 00:16:57.104 { 00:16:57.104 "name": "BaseBdev2", 00:16:57.104 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:57.104 "is_configured": true, 00:16:57.104 "data_offset": 2048, 00:16:57.104 "data_size": 63488 00:16:57.104 } 00:16:57.104 ] 00:16:57.104 }' 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.104 [2024-11-27 14:16:27.354746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.104 [2024-11-27 14:16:27.414679] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.104 [2024-11-27 14:16:27.414764] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:57.104 [2024-11-27 14:16:27.414787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.104 [2024-11-27 14:16:27.414802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.104 "name": "raid_bdev1", 00:16:57.104 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:57.104 "strip_size_kb": 0, 00:16:57.104 "state": "online", 00:16:57.104 "raid_level": "raid1", 00:16:57.104 "superblock": true, 00:16:57.104 "num_base_bdevs": 2, 00:16:57.104 "num_base_bdevs_discovered": 1, 00:16:57.104 "num_base_bdevs_operational": 1, 00:16:57.104 "base_bdevs_list": [ 00:16:57.104 { 00:16:57.104 "name": null, 00:16:57.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.104 "is_configured": false, 00:16:57.104 "data_offset": 0, 00:16:57.104 "data_size": 63488 00:16:57.104 }, 00:16:57.104 { 00:16:57.104 "name": "BaseBdev2", 00:16:57.104 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:57.104 "is_configured": true, 00:16:57.104 "data_offset": 2048, 00:16:57.104 "data_size": 63488 00:16:57.104 } 00:16:57.104 ] 00:16:57.104 }' 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.104 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.670 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.670 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.670 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.670 [2024-11-27 14:16:27.961498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.670 [2024-11-27 14:16:27.961584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.670 [2024-11-27 14:16:27.961616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:57.670 [2024-11-27 14:16:27.961633] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.670 [2024-11-27 14:16:27.962323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.670 [2024-11-27 14:16:27.962374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.670 [2024-11-27 14:16:27.962516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:57.670 [2024-11-27 14:16:27.962543] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:57.670 [2024-11-27 14:16:27.962557] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:57.670 [2024-11-27 14:16:27.962635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.670 [2024-11-27 14:16:27.978291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:57.670 spare 00:16:57.670 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.670 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:57.670 [2024-11-27 14:16:27.980906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.645 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.645 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.645 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.645 "name": "raid_bdev1", 00:16:58.645 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:58.645 "strip_size_kb": 0, 00:16:58.645 "state": "online", 00:16:58.645 "raid_level": "raid1", 00:16:58.645 "superblock": true, 00:16:58.645 "num_base_bdevs": 2, 00:16:58.645 "num_base_bdevs_discovered": 2, 00:16:58.645 "num_base_bdevs_operational": 2, 00:16:58.645 "process": { 00:16:58.645 "type": "rebuild", 00:16:58.645 "target": "spare", 00:16:58.645 "progress": { 00:16:58.645 "blocks": 20480, 00:16:58.645 "percent": 32 00:16:58.645 } 00:16:58.645 }, 00:16:58.645 "base_bdevs_list": [ 00:16:58.645 { 00:16:58.645 "name": "spare", 00:16:58.645 "uuid": "abfe2659-5602-5aef-a53e-2f964a478d0e", 00:16:58.645 "is_configured": true, 00:16:58.645 "data_offset": 2048, 00:16:58.645 "data_size": 63488 00:16:58.645 }, 00:16:58.645 { 00:16:58.645 "name": "BaseBdev2", 00:16:58.645 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:58.645 "is_configured": true, 00:16:58.645 "data_offset": 2048, 00:16:58.645 "data_size": 63488 00:16:58.645 } 00:16:58.645 ] 00:16:58.646 }' 00:16:58.646 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.646 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.646 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.646 
14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.646 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:58.646 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.646 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.646 [2024-11-27 14:16:29.146476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.905 [2024-11-27 14:16:29.190386] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:58.905 [2024-11-27 14:16:29.190464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.905 [2024-11-27 14:16:29.190494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.905 [2024-11-27 14:16:29.190506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.905 "name": "raid_bdev1", 00:16:58.905 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:58.905 "strip_size_kb": 0, 00:16:58.905 "state": "online", 00:16:58.905 "raid_level": "raid1", 00:16:58.905 "superblock": true, 00:16:58.905 "num_base_bdevs": 2, 00:16:58.905 "num_base_bdevs_discovered": 1, 00:16:58.905 "num_base_bdevs_operational": 1, 00:16:58.905 "base_bdevs_list": [ 00:16:58.905 { 00:16:58.905 "name": null, 00:16:58.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.905 "is_configured": false, 00:16:58.905 "data_offset": 0, 00:16:58.905 "data_size": 63488 00:16:58.905 }, 00:16:58.905 { 00:16:58.905 "name": "BaseBdev2", 00:16:58.905 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:58.905 "is_configured": true, 00:16:58.905 "data_offset": 2048, 00:16:58.905 "data_size": 63488 00:16:58.905 } 00:16:58.905 ] 00:16:58.905 }' 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.905 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.472 14:16:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.472 "name": "raid_bdev1", 00:16:59.472 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:16:59.472 "strip_size_kb": 0, 00:16:59.472 "state": "online", 00:16:59.472 "raid_level": "raid1", 00:16:59.472 "superblock": true, 00:16:59.472 "num_base_bdevs": 2, 00:16:59.472 "num_base_bdevs_discovered": 1, 00:16:59.472 "num_base_bdevs_operational": 1, 00:16:59.472 "base_bdevs_list": [ 00:16:59.472 { 00:16:59.472 "name": null, 00:16:59.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.472 "is_configured": false, 00:16:59.472 "data_offset": 0, 00:16:59.472 "data_size": 63488 00:16:59.472 }, 00:16:59.472 { 00:16:59.472 "name": "BaseBdev2", 00:16:59.472 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:16:59.472 "is_configured": true, 00:16:59.472 "data_offset": 2048, 00:16:59.472 "data_size": 
63488 00:16:59.472 } 00:16:59.472 ] 00:16:59.472 }' 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.472 [2024-11-27 14:16:29.914950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:59.472 [2024-11-27 14:16:29.915023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.472 [2024-11-27 14:16:29.915065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:59.472 [2024-11-27 14:16:29.915095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.472 [2024-11-27 14:16:29.915691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.472 [2024-11-27 14:16:29.915724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:16:59.472 [2024-11-27 14:16:29.915866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:59.472 [2024-11-27 14:16:29.915890] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:59.472 [2024-11-27 14:16:29.915905] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:59.472 [2024-11-27 14:16:29.915919] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:59.472 BaseBdev1 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.472 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.845 "name": "raid_bdev1", 00:17:00.845 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:17:00.845 "strip_size_kb": 0, 00:17:00.845 "state": "online", 00:17:00.845 "raid_level": "raid1", 00:17:00.845 "superblock": true, 00:17:00.845 "num_base_bdevs": 2, 00:17:00.845 "num_base_bdevs_discovered": 1, 00:17:00.845 "num_base_bdevs_operational": 1, 00:17:00.845 "base_bdevs_list": [ 00:17:00.845 { 00:17:00.845 "name": null, 00:17:00.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.845 "is_configured": false, 00:17:00.845 "data_offset": 0, 00:17:00.845 "data_size": 63488 00:17:00.845 }, 00:17:00.845 { 00:17:00.845 "name": "BaseBdev2", 00:17:00.845 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:17:00.845 "is_configured": true, 00:17:00.845 "data_offset": 2048, 00:17:00.845 "data_size": 63488 00:17:00.845 } 00:17:00.845 ] 00:17:00.845 }' 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.845 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.102 "name": "raid_bdev1", 00:17:01.102 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:17:01.102 "strip_size_kb": 0, 00:17:01.102 "state": "online", 00:17:01.102 "raid_level": "raid1", 00:17:01.102 "superblock": true, 00:17:01.102 "num_base_bdevs": 2, 00:17:01.102 "num_base_bdevs_discovered": 1, 00:17:01.102 "num_base_bdevs_operational": 1, 00:17:01.102 "base_bdevs_list": [ 00:17:01.102 { 00:17:01.102 "name": null, 00:17:01.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.102 "is_configured": false, 00:17:01.102 "data_offset": 0, 00:17:01.102 "data_size": 63488 00:17:01.102 }, 00:17:01.102 { 00:17:01.102 "name": "BaseBdev2", 00:17:01.102 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:17:01.102 "is_configured": true, 00:17:01.102 "data_offset": 2048, 00:17:01.102 "data_size": 63488 00:17:01.102 } 00:17:01.102 ] 00:17:01.102 }' 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.102 14:16:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.102 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.360 [2024-11-27 14:16:31.615640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.360 [2024-11-27 14:16:31.615906] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:01.360 [2024-11-27 14:16:31.615954] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:01.360 request: 00:17:01.360 { 00:17:01.360 "base_bdev": "BaseBdev1", 00:17:01.360 "raid_bdev": "raid_bdev1", 00:17:01.360 "method": 
"bdev_raid_add_base_bdev", 00:17:01.360 "req_id": 1 00:17:01.360 } 00:17:01.360 Got JSON-RPC error response 00:17:01.360 response: 00:17:01.360 { 00:17:01.360 "code": -22, 00:17:01.360 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:01.360 } 00:17:01.360 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:01.360 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:01.360 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:01.360 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:01.360 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:01.360 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:02.299 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:02.299 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.299 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.299 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.300 14:16:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.300 "name": "raid_bdev1", 00:17:02.300 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:17:02.300 "strip_size_kb": 0, 00:17:02.300 "state": "online", 00:17:02.300 "raid_level": "raid1", 00:17:02.300 "superblock": true, 00:17:02.300 "num_base_bdevs": 2, 00:17:02.300 "num_base_bdevs_discovered": 1, 00:17:02.300 "num_base_bdevs_operational": 1, 00:17:02.300 "base_bdevs_list": [ 00:17:02.300 { 00:17:02.300 "name": null, 00:17:02.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.300 "is_configured": false, 00:17:02.300 "data_offset": 0, 00:17:02.300 "data_size": 63488 00:17:02.300 }, 00:17:02.300 { 00:17:02.300 "name": "BaseBdev2", 00:17:02.300 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:17:02.300 "is_configured": true, 00:17:02.300 "data_offset": 2048, 00:17:02.300 "data_size": 63488 00:17:02.300 } 00:17:02.300 ] 00:17:02.300 }' 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.300 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.866 "name": "raid_bdev1", 00:17:02.866 "uuid": "397013b5-c2f2-4711-9548-d34c46090028", 00:17:02.866 "strip_size_kb": 0, 00:17:02.866 "state": "online", 00:17:02.866 "raid_level": "raid1", 00:17:02.866 "superblock": true, 00:17:02.866 "num_base_bdevs": 2, 00:17:02.866 "num_base_bdevs_discovered": 1, 00:17:02.866 "num_base_bdevs_operational": 1, 00:17:02.866 "base_bdevs_list": [ 00:17:02.866 { 00:17:02.866 "name": null, 00:17:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.866 "is_configured": false, 00:17:02.866 "data_offset": 0, 00:17:02.866 "data_size": 63488 00:17:02.866 }, 00:17:02.866 { 00:17:02.866 "name": "BaseBdev2", 00:17:02.866 "uuid": "c7956fd0-3dbb-5751-9201-6903cee40d1d", 00:17:02.866 "is_configured": true, 00:17:02.866 "data_offset": 2048, 00:17:02.866 "data_size": 63488 00:17:02.866 } 00:17:02.866 ] 00:17:02.866 }' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76120 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76120 ']' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76120 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76120 00:17:02.866 killing process with pid 76120 00:17:02.866 Received shutdown signal, test time was about 60.000000 seconds 00:17:02.866 00:17:02.866 Latency(us) 00:17:02.866 [2024-11-27T14:16:33.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.866 [2024-11-27T14:16:33.379Z] =================================================================================================================== 00:17:02.866 [2024-11-27T14:16:33.379Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76120' 00:17:02.866 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76120 00:17:02.866 [2024-11-27 14:16:33.347695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.866 14:16:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76120 00:17:02.866 [2024-11-27 14:16:33.347880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.866 [2024-11-27 14:16:33.347953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.866 [2024-11-27 14:16:33.347974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:03.125 [2024-11-27 14:16:33.617740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.501 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:04.501 00:17:04.501 real 0m27.185s 00:17:04.501 user 0m33.220s 00:17:04.501 sys 0m4.442s 00:17:04.501 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.501 ************************************ 00:17:04.501 END TEST raid_rebuild_test_sb 00:17:04.501 ************************************ 00:17:04.501 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.501 14:16:34 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:17:04.501 14:16:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:04.501 14:16:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.501 14:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.501 ************************************ 00:17:04.501 START TEST raid_rebuild_test_io 00:17:04.501 ************************************ 00:17:04.501 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:17:04.501 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:04.502 
14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76883 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76883 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76883 ']' 00:17:04.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.502 14:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.502 [2024-11-27 14:16:34.848836] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:17:04.502 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:04.502 Zero copy mechanism will not be used. 
00:17:04.502 [2024-11-27 14:16:34.850208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76883 ] 00:17:04.760 [2024-11-27 14:16:35.036900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.760 [2024-11-27 14:16:35.168697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.018 [2024-11-27 14:16:35.374004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.018 [2024-11-27 14:16:35.374325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.276 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.276 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:05.276 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.276 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.276 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.276 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 BaseBdev1_malloc 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 [2024-11-27 14:16:35.800424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:05.545 [2024-11-27 14:16:35.800514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.545 [2024-11-27 14:16:35.800548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.545 [2024-11-27 14:16:35.800567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.545 [2024-11-27 14:16:35.803436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.545 [2024-11-27 14:16:35.803642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.545 BaseBdev1 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 BaseBdev2_malloc 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 [2024-11-27 14:16:35.853255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:05.545 [2024-11-27 14:16:35.853336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.545 [2024-11-27 14:16:35.853371] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.545 [2024-11-27 14:16:35.853389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.545 [2024-11-27 14:16:35.856171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.545 [2024-11-27 14:16:35.856221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:05.545 BaseBdev2 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 spare_malloc 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 spare_delay 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.545 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.545 [2024-11-27 14:16:35.921716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:17:05.545 [2024-11-27 14:16:35.921795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.545 [2024-11-27 14:16:35.921845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:05.545 [2024-11-27 14:16:35.921867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.546 [2024-11-27 14:16:35.924707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.546 [2024-11-27 14:16:35.924761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:05.546 spare 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.546 [2024-11-27 14:16:35.929785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.546 [2024-11-27 14:16:35.932236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.546 [2024-11-27 14:16:35.932360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.546 [2024-11-27 14:16:35.932384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:05.546 [2024-11-27 14:16:35.932703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:05.546 [2024-11-27 14:16:35.932972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.546 [2024-11-27 14:16:35.932994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:17:05.546 [2024-11-27 14:16:35.933188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.546 
"name": "raid_bdev1", 00:17:05.546 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:05.546 "strip_size_kb": 0, 00:17:05.546 "state": "online", 00:17:05.546 "raid_level": "raid1", 00:17:05.546 "superblock": false, 00:17:05.546 "num_base_bdevs": 2, 00:17:05.546 "num_base_bdevs_discovered": 2, 00:17:05.546 "num_base_bdevs_operational": 2, 00:17:05.546 "base_bdevs_list": [ 00:17:05.546 { 00:17:05.546 "name": "BaseBdev1", 00:17:05.546 "uuid": "df63779c-b114-52ef-9584-1d0a792ca157", 00:17:05.546 "is_configured": true, 00:17:05.546 "data_offset": 0, 00:17:05.546 "data_size": 65536 00:17:05.546 }, 00:17:05.546 { 00:17:05.546 "name": "BaseBdev2", 00:17:05.546 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:05.546 "is_configured": true, 00:17:05.546 "data_offset": 0, 00:17:05.546 "data_size": 65536 00:17:05.546 } 00:17:05.546 ] 00:17:05.546 }' 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.546 14:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:06.113 [2024-11-27 14:16:36.466315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.113 [2024-11-27 14:16:36.557935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.113 14:16:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.113 "name": "raid_bdev1", 00:17:06.113 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:06.113 "strip_size_kb": 0, 00:17:06.113 "state": "online", 00:17:06.113 "raid_level": "raid1", 00:17:06.113 "superblock": false, 00:17:06.113 "num_base_bdevs": 2, 00:17:06.113 "num_base_bdevs_discovered": 1, 00:17:06.113 "num_base_bdevs_operational": 1, 00:17:06.113 "base_bdevs_list": [ 00:17:06.113 { 00:17:06.113 "name": null, 00:17:06.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.113 "is_configured": false, 00:17:06.113 "data_offset": 0, 00:17:06.113 "data_size": 65536 00:17:06.113 }, 00:17:06.113 { 00:17:06.113 "name": "BaseBdev2", 00:17:06.113 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:06.113 "is_configured": true, 00:17:06.113 "data_offset": 0, 00:17:06.113 "data_size": 65536 00:17:06.113 } 00:17:06.113 ] 00:17:06.113 }' 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:06.113 14:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.372 [2024-11-27 14:16:36.686102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:06.372 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.372 Zero copy mechanism will not be used. 00:17:06.372 Running I/O for 60 seconds... 00:17:06.631 14:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.631 14:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.631 14:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.631 [2024-11-27 14:16:37.070374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.631 14:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.631 14:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.889 [2024-11-27 14:16:37.147904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:06.889 [2024-11-27 14:16:37.150429] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.889 [2024-11-27 14:16:37.283762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:07.148 [2024-11-27 14:16:37.487852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:07.148 [2024-11-27 14:16:37.488203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:07.405 156.00 IOPS, 468.00 MiB/s [2024-11-27T14:16:37.918Z] [2024-11-27 14:16:37.803784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:17:07.664 [2024-11-27 14:16:38.006364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:07.664 [2024-11-27 14:16:38.006718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.664 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.923 "name": "raid_bdev1", 00:17:07.923 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:07.923 "strip_size_kb": 0, 00:17:07.923 "state": "online", 00:17:07.923 "raid_level": "raid1", 00:17:07.923 "superblock": false, 00:17:07.923 "num_base_bdevs": 2, 00:17:07.923 "num_base_bdevs_discovered": 2, 00:17:07.923 "num_base_bdevs_operational": 2, 00:17:07.923 "process": { 00:17:07.923 "type": "rebuild", 00:17:07.923 "target": 
"spare", 00:17:07.923 "progress": { 00:17:07.923 "blocks": 12288, 00:17:07.923 "percent": 18 00:17:07.923 } 00:17:07.923 }, 00:17:07.923 "base_bdevs_list": [ 00:17:07.923 { 00:17:07.923 "name": "spare", 00:17:07.923 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:07.923 "is_configured": true, 00:17:07.923 "data_offset": 0, 00:17:07.923 "data_size": 65536 00:17:07.923 }, 00:17:07.923 { 00:17:07.923 "name": "BaseBdev2", 00:17:07.923 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:07.923 "is_configured": true, 00:17:07.923 "data_offset": 0, 00:17:07.923 "data_size": 65536 00:17:07.923 } 00:17:07.923 ] 00:17:07.923 }' 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.923 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.923 [2024-11-27 14:16:38.309646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.923 [2024-11-27 14:16:38.381529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:07.923 [2024-11-27 14:16:38.381884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:08.182 [2024-11-27 14:16:38.483712] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:17:08.182 [2024-11-27 14:16:38.494706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.182 [2024-11-27 14:16:38.494790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.182 [2024-11-27 14:16:38.494817] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.182 [2024-11-27 14:16:38.546628] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.182 "name": "raid_bdev1", 00:17:08.182 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:08.182 "strip_size_kb": 0, 00:17:08.182 "state": "online", 00:17:08.182 "raid_level": "raid1", 00:17:08.182 "superblock": false, 00:17:08.182 "num_base_bdevs": 2, 00:17:08.182 "num_base_bdevs_discovered": 1, 00:17:08.182 "num_base_bdevs_operational": 1, 00:17:08.182 "base_bdevs_list": [ 00:17:08.182 { 00:17:08.182 "name": null, 00:17:08.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.182 "is_configured": false, 00:17:08.182 "data_offset": 0, 00:17:08.182 "data_size": 65536 00:17:08.182 }, 00:17:08.182 { 00:17:08.182 "name": "BaseBdev2", 00:17:08.182 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:08.182 "is_configured": true, 00:17:08.182 "data_offset": 0, 00:17:08.182 "data_size": 65536 00:17:08.182 } 00:17:08.182 ] 00:17:08.182 }' 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.182 14:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.761 126.00 IOPS, 378.00 MiB/s [2024-11-27T14:16:39.274Z] 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.761 14:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.762 "name": "raid_bdev1", 00:17:08.762 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:08.762 "strip_size_kb": 0, 00:17:08.762 "state": "online", 00:17:08.762 "raid_level": "raid1", 00:17:08.762 "superblock": false, 00:17:08.762 "num_base_bdevs": 2, 00:17:08.762 "num_base_bdevs_discovered": 1, 00:17:08.762 "num_base_bdevs_operational": 1, 00:17:08.762 "base_bdevs_list": [ 00:17:08.762 { 00:17:08.762 "name": null, 00:17:08.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.762 "is_configured": false, 00:17:08.762 "data_offset": 0, 00:17:08.762 "data_size": 65536 00:17:08.762 }, 00:17:08.762 { 00:17:08.762 "name": "BaseBdev2", 00:17:08.762 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:08.762 "is_configured": true, 00:17:08.762 "data_offset": 0, 00:17:08.762 "data_size": 65536 00:17:08.762 } 00:17:08.762 ] 00:17:08.762 }' 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.762 [2024-11-27 14:16:39.216405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.762 14:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:09.019 [2024-11-27 14:16:39.308891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:09.019 [2024-11-27 14:16:39.311711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.019 [2024-11-27 14:16:39.422142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:09.019 [2024-11-27 14:16:39.422645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:09.278 [2024-11-27 14:16:39.632684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:09.278 [2024-11-27 14:16:39.633117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:09.537 132.33 IOPS, 397.00 MiB/s [2024-11-27T14:16:40.050Z] [2024-11-27 14:16:39.969077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.795 14:16:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.054 "name": "raid_bdev1", 00:17:10.054 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:10.054 "strip_size_kb": 0, 00:17:10.054 "state": "online", 00:17:10.054 "raid_level": "raid1", 00:17:10.054 "superblock": false, 00:17:10.054 "num_base_bdevs": 2, 00:17:10.054 "num_base_bdevs_discovered": 2, 00:17:10.054 "num_base_bdevs_operational": 2, 00:17:10.054 "process": { 00:17:10.054 "type": "rebuild", 00:17:10.054 "target": "spare", 00:17:10.054 "progress": { 00:17:10.054 "blocks": 12288, 00:17:10.054 "percent": 18 00:17:10.054 } 00:17:10.054 }, 00:17:10.054 "base_bdevs_list": [ 00:17:10.054 { 00:17:10.054 "name": "spare", 00:17:10.054 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:10.054 "is_configured": true, 00:17:10.054 "data_offset": 0, 00:17:10.054 "data_size": 65536 00:17:10.054 }, 00:17:10.054 { 00:17:10.054 "name": "BaseBdev2", 00:17:10.054 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:10.054 "is_configured": true, 00:17:10.054 "data_offset": 0, 00:17:10.054 
"data_size": 65536 00:17:10.054 } 00:17:10.054 ] 00:17:10.054 }' 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.054 [2024-11-27 14:16:40.345750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.054 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.054 "name": "raid_bdev1", 00:17:10.054 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:10.054 "strip_size_kb": 0, 00:17:10.054 "state": "online", 00:17:10.054 "raid_level": "raid1", 00:17:10.054 "superblock": false, 00:17:10.054 "num_base_bdevs": 2, 00:17:10.054 "num_base_bdevs_discovered": 2, 00:17:10.054 "num_base_bdevs_operational": 2, 00:17:10.054 "process": { 00:17:10.054 "type": "rebuild", 00:17:10.054 "target": "spare", 00:17:10.054 "progress": { 00:17:10.054 "blocks": 14336, 00:17:10.054 "percent": 21 00:17:10.054 } 00:17:10.054 }, 00:17:10.054 "base_bdevs_list": [ 00:17:10.054 { 00:17:10.054 "name": "spare", 00:17:10.054 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:10.054 "is_configured": true, 00:17:10.054 "data_offset": 0, 00:17:10.054 "data_size": 65536 00:17:10.054 }, 00:17:10.054 { 00:17:10.054 "name": "BaseBdev2", 00:17:10.054 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:10.054 "is_configured": true, 00:17:10.054 "data_offset": 0, 00:17:10.054 "data_size": 65536 00:17:10.054 } 00:17:10.054 ] 00:17:10.054 }' 00:17:10.055 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.055 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.055 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.312 [2024-11-27 14:16:40.581979] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:10.312 [2024-11-27 14:16:40.582384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:10.312 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.312 14:16:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.571 121.25 IOPS, 363.75 MiB/s [2024-11-27T14:16:41.084Z] [2024-11-27 14:16:40.923059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:10.571 [2024-11-27 14:16:41.058528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:11.136 [2024-11-27 14:16:41.428566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.137 14:16:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.395 [2024-11-27 14:16:41.648335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:11.395 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.395 "name": "raid_bdev1", 00:17:11.395 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:11.395 "strip_size_kb": 0, 00:17:11.395 "state": "online", 00:17:11.395 "raid_level": "raid1", 00:17:11.395 "superblock": false, 00:17:11.395 "num_base_bdevs": 2, 00:17:11.395 "num_base_bdevs_discovered": 2, 00:17:11.395 "num_base_bdevs_operational": 2, 00:17:11.395 "process": { 00:17:11.395 "type": "rebuild", 00:17:11.395 "target": "spare", 00:17:11.395 "progress": { 00:17:11.395 "blocks": 26624, 00:17:11.395 "percent": 40 00:17:11.395 } 00:17:11.395 }, 00:17:11.395 "base_bdevs_list": [ 00:17:11.395 { 00:17:11.395 "name": "spare", 00:17:11.395 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:11.395 "is_configured": true, 00:17:11.395 "data_offset": 0, 00:17:11.395 "data_size": 65536 00:17:11.395 }, 00:17:11.395 { 00:17:11.395 "name": "BaseBdev2", 00:17:11.395 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:11.395 "is_configured": true, 00:17:11.395 "data_offset": 0, 00:17:11.395 "data_size": 65536 00:17:11.395 } 00:17:11.395 ] 00:17:11.395 }' 00:17:11.395 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.395 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.395 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.395 108.80 IOPS, 326.40 MiB/s 
[2024-11-27T14:16:41.908Z] 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.395 14:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.654 [2024-11-27 14:16:41.986687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:11.655 [2024-11-27 14:16:42.098426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:11.655 [2024-11-27 14:16:42.098853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:11.913 [2024-11-27 14:16:42.410341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:12.479 98.00 IOPS, 294.00 MiB/s [2024-11-27T14:16:42.992Z] [2024-11-27 14:16:42.722611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.479 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.479 "name": "raid_bdev1", 00:17:12.479 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:12.479 "strip_size_kb": 0, 00:17:12.479 "state": "online", 00:17:12.479 "raid_level": "raid1", 00:17:12.479 "superblock": false, 00:17:12.479 "num_base_bdevs": 2, 00:17:12.479 "num_base_bdevs_discovered": 2, 00:17:12.479 "num_base_bdevs_operational": 2, 00:17:12.479 "process": { 00:17:12.479 "type": "rebuild", 00:17:12.479 "target": "spare", 00:17:12.479 "progress": { 00:17:12.479 "blocks": 45056, 00:17:12.479 "percent": 68 00:17:12.479 } 00:17:12.479 }, 00:17:12.479 "base_bdevs_list": [ 00:17:12.479 { 00:17:12.480 "name": "spare", 00:17:12.480 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:12.480 "is_configured": true, 00:17:12.480 "data_offset": 0, 00:17:12.480 "data_size": 65536 00:17:12.480 }, 00:17:12.480 { 00:17:12.480 "name": "BaseBdev2", 00:17:12.480 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:12.480 "is_configured": true, 00:17:12.480 "data_offset": 0, 00:17:12.480 "data_size": 65536 00:17:12.480 } 00:17:12.480 ] 00:17:12.480 }' 00:17:12.480 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.480 [2024-11-27 14:16:42.839902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:12.480 [2024-11-27 14:16:42.840162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:12.480 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.480 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.480 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.480 14:16:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.417 87.86 IOPS, 263.57 MiB/s [2024-11-27T14:16:43.930Z] 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.417 14:16:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.683 [2024-11-27 14:16:43.949212] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:13.683 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.683 "name": "raid_bdev1", 00:17:13.683 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:13.683 "strip_size_kb": 0, 00:17:13.683 
"state": "online", 00:17:13.683 "raid_level": "raid1", 00:17:13.683 "superblock": false, 00:17:13.683 "num_base_bdevs": 2, 00:17:13.683 "num_base_bdevs_discovered": 2, 00:17:13.683 "num_base_bdevs_operational": 2, 00:17:13.683 "process": { 00:17:13.683 "type": "rebuild", 00:17:13.683 "target": "spare", 00:17:13.683 "progress": { 00:17:13.683 "blocks": 63488, 00:17:13.683 "percent": 96 00:17:13.683 } 00:17:13.683 }, 00:17:13.683 "base_bdevs_list": [ 00:17:13.683 { 00:17:13.683 "name": "spare", 00:17:13.683 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:13.683 "is_configured": true, 00:17:13.683 "data_offset": 0, 00:17:13.683 "data_size": 65536 00:17:13.683 }, 00:17:13.683 { 00:17:13.683 "name": "BaseBdev2", 00:17:13.683 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:13.683 "is_configured": true, 00:17:13.683 "data_offset": 0, 00:17:13.683 "data_size": 65536 00:17:13.683 } 00:17:13.683 ] 00:17:13.683 }' 00:17:13.683 14:16:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.683 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.683 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.683 [2024-11-27 14:16:44.049211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:13.683 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.683 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.683 [2024-11-27 14:16:44.060057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.817 81.75 IOPS, 245.25 MiB/s [2024-11-27T14:16:45.330Z] 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.817 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.817 "name": "raid_bdev1", 00:17:14.817 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:14.817 "strip_size_kb": 0, 00:17:14.817 "state": "online", 00:17:14.818 "raid_level": "raid1", 00:17:14.818 "superblock": false, 00:17:14.818 "num_base_bdevs": 2, 00:17:14.818 "num_base_bdevs_discovered": 2, 00:17:14.818 "num_base_bdevs_operational": 2, 00:17:14.818 "base_bdevs_list": [ 00:17:14.818 { 00:17:14.818 "name": "spare", 00:17:14.818 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:14.818 "is_configured": true, 00:17:14.818 "data_offset": 0, 00:17:14.818 "data_size": 65536 00:17:14.818 }, 00:17:14.818 { 00:17:14.818 "name": "BaseBdev2", 00:17:14.818 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:14.818 "is_configured": true, 00:17:14.818 "data_offset": 0, 00:17:14.818 "data_size": 65536 00:17:14.818 } 00:17:14.818 ] 00:17:14.818 }' 00:17:14.818 14:16:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.818 "name": "raid_bdev1", 00:17:14.818 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:14.818 "strip_size_kb": 0, 00:17:14.818 "state": "online", 00:17:14.818 "raid_level": "raid1", 00:17:14.818 "superblock": false, 00:17:14.818 "num_base_bdevs": 2, 
00:17:14.818 "num_base_bdevs_discovered": 2, 00:17:14.818 "num_base_bdevs_operational": 2, 00:17:14.818 "base_bdevs_list": [ 00:17:14.818 { 00:17:14.818 "name": "spare", 00:17:14.818 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:14.818 "is_configured": true, 00:17:14.818 "data_offset": 0, 00:17:14.818 "data_size": 65536 00:17:14.818 }, 00:17:14.818 { 00:17:14.818 "name": "BaseBdev2", 00:17:14.818 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:14.818 "is_configured": true, 00:17:14.818 "data_offset": 0, 00:17:14.818 "data_size": 65536 00:17:14.818 } 00:17:14.818 ] 00:17:14.818 }' 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.818 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.076 "name": "raid_bdev1", 00:17:15.076 "uuid": "f668e36d-c398-4749-8fcb-dfff4473baea", 00:17:15.076 "strip_size_kb": 0, 00:17:15.076 "state": "online", 00:17:15.076 "raid_level": "raid1", 00:17:15.076 "superblock": false, 00:17:15.076 "num_base_bdevs": 2, 00:17:15.076 "num_base_bdevs_discovered": 2, 00:17:15.076 "num_base_bdevs_operational": 2, 00:17:15.076 "base_bdevs_list": [ 00:17:15.076 { 00:17:15.076 "name": "spare", 00:17:15.076 "uuid": "c30f64c0-056f-5d57-98db-d5ecac2582b8", 00:17:15.076 "is_configured": true, 00:17:15.076 "data_offset": 0, 00:17:15.076 "data_size": 65536 00:17:15.076 }, 00:17:15.076 { 00:17:15.076 "name": "BaseBdev2", 00:17:15.076 "uuid": "908d0693-0f10-52c7-a90a-0633d1160c92", 00:17:15.076 "is_configured": true, 00:17:15.076 "data_offset": 0, 00:17:15.076 "data_size": 65536 00:17:15.076 } 00:17:15.076 ] 00:17:15.076 }' 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.076 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.594 77.78 IOPS, 233.33 MiB/s [2024-11-27T14:16:46.107Z] 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.594 [2024-11-27 14:16:45.868872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.594 [2024-11-27 14:16:45.868912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.594 00:17:15.594 Latency(us) 00:17:15.594 [2024-11-27T14:16:46.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.594 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:15.594 raid_bdev1 : 9.27 76.61 229.82 0.00 0.00 17355.27 305.34 111053.73 00:17:15.594 [2024-11-27T14:16:46.107Z] =================================================================================================================== 00:17:15.594 [2024-11-27T14:16:46.107Z] Total : 76.61 229.82 0.00 0.00 17355.27 305.34 111053.73 00:17:15.594 { 00:17:15.594 "results": [ 00:17:15.594 { 00:17:15.594 "job": "raid_bdev1", 00:17:15.594 "core_mask": "0x1", 00:17:15.594 "workload": "randrw", 00:17:15.594 "percentage": 50, 00:17:15.594 "status": "finished", 00:17:15.594 "queue_depth": 2, 00:17:15.594 "io_size": 3145728, 00:17:15.594 "runtime": 9.268113, 00:17:15.594 "iops": 76.60674832082863, 00:17:15.594 "mibps": 229.8202449624859, 00:17:15.594 "io_failed": 0, 00:17:15.594 "io_timeout": 0, 00:17:15.594 "avg_latency_us": 17355.2712112676, 00:17:15.594 "min_latency_us": 305.3381818181818, 00:17:15.594 "max_latency_us": 111053.73090909091 00:17:15.594 } 00:17:15.594 ], 00:17:15.594 "core_count": 1 00:17:15.594 } 00:17:15.594 [2024-11-27 14:16:45.977129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.594 [2024-11-27 14:16:45.977228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.594 
[2024-11-27 14:16:45.977344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.594 [2024-11-27 14:16:45.977366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:15.594 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:15.594 14:16:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.594 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:15.595 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:16.162 /dev/nbd0 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.162 1+0 records in 00:17:16.162 1+0 records out 00:17:16.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466832 s, 8.8 MB/s 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:16.162 
14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:16.162 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:16.421 /dev/nbd1 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.421 1+0 records in 00:17:16.421 1+0 records out 00:17:16.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673171 s, 6.1 MB/s 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.421 14:16:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:16.421 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.679 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.938 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.197 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76883 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76883 ']' 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76883 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:17.198 14:16:47 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76883 00:17:17.198 killing process with pid 76883 00:17:17.198 Received shutdown signal, test time was about 10.880171 seconds 00:17:17.198 00:17:17.198 Latency(us) 00:17:17.198 [2024-11-27T14:16:47.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.198 [2024-11-27T14:16:47.711Z] =================================================================================================================== 00:17:17.198 [2024-11-27T14:16:47.711Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76883' 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76883 00:17:17.198 [2024-11-27 14:16:47.568870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.198 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76883 00:17:17.456 [2024-11-27 14:16:47.791073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.835 ************************************ 00:17:18.835 END TEST raid_rebuild_test_io 00:17:18.835 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:18.835 00:17:18.835 real 0m14.228s 00:17:18.835 user 0m18.260s 00:17:18.835 sys 0m1.503s 00:17:18.835 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.835 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.835 
************************************ 00:17:18.835 14:16:49 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:17:18.835 14:16:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:18.835 14:16:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.835 14:16:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.835 ************************************ 00:17:18.835 START TEST raid_rebuild_test_sb_io 00:17:18.835 ************************************ 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:18.835 14:16:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:18.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77284 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77284 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77284 ']' 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.835 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:18.836 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.836 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.836 14:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.836 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:18.836 Zero copy mechanism will not be used. 00:17:18.836 [2024-11-27 14:16:49.114728] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:17:18.836 [2024-11-27 14:16:49.114906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77284 ] 00:17:18.836 [2024-11-27 14:16:49.290508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.094 [2024-11-27 14:16:49.427003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.354 [2024-11-27 14:16:49.639816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.354 [2024-11-27 14:16:49.639917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.612 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.612 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:19.612 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:19.612 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:19.612 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.612 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.871 BaseBdev1_malloc 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.871 [2024-11-27 14:16:50.140839] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:19.871 [2024-11-27 14:16:50.141098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.871 [2024-11-27 14:16:50.141150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:19.871 [2024-11-27 14:16:50.141183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.871 [2024-11-27 14:16:50.144447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.871 [2024-11-27 14:16:50.144691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:19.871 BaseBdev1 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.871 BaseBdev2_malloc 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.871 [2024-11-27 14:16:50.196618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:19.871 [2024-11-27 14:16:50.196708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:19.871 [2024-11-27 14:16:50.196740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:19.871 [2024-11-27 14:16:50.196757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.871 [2024-11-27 14:16:50.199988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.871 [2024-11-27 14:16:50.200248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:19.871 BaseBdev2 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.871 spare_malloc 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:19.871 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.872 spare_delay 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.872 
[2024-11-27 14:16:50.267807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:19.872 [2024-11-27 14:16:50.267941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.872 [2024-11-27 14:16:50.267977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:19.872 [2024-11-27 14:16:50.268021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.872 [2024-11-27 14:16:50.271142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.872 [2024-11-27 14:16:50.271210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:19.872 spare 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.872 [2024-11-27 14:16:50.276122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.872 [2024-11-27 14:16:50.278637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.872 [2024-11-27 14:16:50.278958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:19.872 [2024-11-27 14:16:50.278984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.872 [2024-11-27 14:16:50.279351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:19.872 [2024-11-27 14:16:50.279632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:19.872 [2024-11-27 
14:16:50.279662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:19.872 [2024-11-27 14:16:50.279918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.872 "name": "raid_bdev1", 00:17:19.872 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:19.872 "strip_size_kb": 0, 00:17:19.872 "state": "online", 00:17:19.872 "raid_level": "raid1", 00:17:19.872 "superblock": true, 00:17:19.872 "num_base_bdevs": 2, 00:17:19.872 "num_base_bdevs_discovered": 2, 00:17:19.872 "num_base_bdevs_operational": 2, 00:17:19.872 "base_bdevs_list": [ 00:17:19.872 { 00:17:19.872 "name": "BaseBdev1", 00:17:19.872 "uuid": "da7e2973-8abb-5971-992a-8095a343d9ba", 00:17:19.872 "is_configured": true, 00:17:19.872 "data_offset": 2048, 00:17:19.872 "data_size": 63488 00:17:19.872 }, 00:17:19.872 { 00:17:19.872 "name": "BaseBdev2", 00:17:19.872 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:19.872 "is_configured": true, 00:17:19.872 "data_offset": 2048, 00:17:19.872 "data_size": 63488 00:17:19.872 } 00:17:19.872 ] 00:17:19.872 }' 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.872 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.439 [2024-11-27 14:16:50.808668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:20.439 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.440 [2024-11-27 14:16:50.908293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.440 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.698 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.698 "name": "raid_bdev1", 00:17:20.698 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:20.698 "strip_size_kb": 0, 00:17:20.698 "state": "online", 00:17:20.698 "raid_level": "raid1", 00:17:20.698 "superblock": true, 00:17:20.698 "num_base_bdevs": 2, 00:17:20.698 "num_base_bdevs_discovered": 1, 00:17:20.698 "num_base_bdevs_operational": 1, 00:17:20.698 "base_bdevs_list": [ 00:17:20.698 { 00:17:20.698 "name": null, 00:17:20.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.698 "is_configured": false, 00:17:20.698 "data_offset": 0, 00:17:20.698 "data_size": 63488 00:17:20.698 }, 00:17:20.698 { 00:17:20.698 "name": "BaseBdev2", 00:17:20.698 "uuid": 
"2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:20.698 "is_configured": true, 00:17:20.698 "data_offset": 2048, 00:17:20.698 "data_size": 63488 00:17:20.698 } 00:17:20.698 ] 00:17:20.698 }' 00:17:20.698 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.698 14:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.698 [2024-11-27 14:16:51.032607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:20.698 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:20.698 Zero copy mechanism will not be used. 00:17:20.698 Running I/O for 60 seconds... 00:17:20.957 14:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:20.957 14:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.957 14:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.957 [2024-11-27 14:16:51.397057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.957 14:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.957 14:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:21.215 [2024-11-27 14:16:51.499276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:21.215 [2024-11-27 14:16:51.502365] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:21.215 [2024-11-27 14:16:51.613734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:21.215 [2024-11-27 14:16:51.614804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:21.474 [2024-11-27 14:16:51.836909] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:21.474 [2024-11-27 14:16:51.837586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:21.732 153.00 IOPS, 459.00 MiB/s [2024-11-27T14:16:52.245Z] [2024-11-27 14:16:52.164734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.991 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.250 "name": "raid_bdev1", 00:17:22.250 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:22.250 "strip_size_kb": 0, 00:17:22.250 "state": "online", 00:17:22.250 "raid_level": "raid1", 00:17:22.250 "superblock": true, 00:17:22.250 "num_base_bdevs": 2, 00:17:22.250 
"num_base_bdevs_discovered": 2, 00:17:22.250 "num_base_bdevs_operational": 2, 00:17:22.250 "process": { 00:17:22.250 "type": "rebuild", 00:17:22.250 "target": "spare", 00:17:22.250 "progress": { 00:17:22.250 "blocks": 12288, 00:17:22.250 "percent": 19 00:17:22.250 } 00:17:22.250 }, 00:17:22.250 "base_bdevs_list": [ 00:17:22.250 { 00:17:22.250 "name": "spare", 00:17:22.250 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:22.250 "is_configured": true, 00:17:22.250 "data_offset": 2048, 00:17:22.250 "data_size": 63488 00:17:22.250 }, 00:17:22.250 { 00:17:22.250 "name": "BaseBdev2", 00:17:22.250 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:22.250 "is_configured": true, 00:17:22.250 "data_offset": 2048, 00:17:22.250 "data_size": 63488 00:17:22.250 } 00:17:22.250 ] 00:17:22.250 }' 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.250 [2024-11-27 14:16:52.545240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.250 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.250 [2024-11-27 14:16:52.635788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.250 [2024-11-27 14:16:52.673874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:17:22.250 [2024-11-27 14:16:52.674295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:22.250 [2024-11-27 14:16:52.693724] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.250 [2024-11-27 14:16:52.704669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.250 [2024-11-27 14:16:52.704859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.250 [2024-11-27 14:16:52.704891] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.250 [2024-11-27 14:16:52.752896] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.509 "name": "raid_bdev1", 00:17:22.509 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:22.509 "strip_size_kb": 0, 00:17:22.509 "state": "online", 00:17:22.509 "raid_level": "raid1", 00:17:22.509 "superblock": true, 00:17:22.509 "num_base_bdevs": 2, 00:17:22.509 "num_base_bdevs_discovered": 1, 00:17:22.509 "num_base_bdevs_operational": 1, 00:17:22.509 "base_bdevs_list": [ 00:17:22.509 { 00:17:22.509 "name": null, 00:17:22.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.509 "is_configured": false, 00:17:22.509 "data_offset": 0, 00:17:22.509 "data_size": 63488 00:17:22.509 }, 00:17:22.509 { 00:17:22.509 "name": "BaseBdev2", 00:17:22.509 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:22.509 "is_configured": true, 00:17:22.509 "data_offset": 2048, 00:17:22.509 "data_size": 63488 00:17:22.509 } 00:17:22.509 ] 00:17:22.509 }' 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.509 14:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.767 150.00 IOPS, 450.00 MiB/s [2024-11-27T14:16:53.280Z] 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.026 "name": "raid_bdev1", 00:17:23.026 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:23.026 "strip_size_kb": 0, 00:17:23.026 "state": "online", 00:17:23.026 "raid_level": "raid1", 00:17:23.026 "superblock": true, 00:17:23.026 "num_base_bdevs": 2, 00:17:23.026 "num_base_bdevs_discovered": 1, 00:17:23.026 "num_base_bdevs_operational": 1, 00:17:23.026 "base_bdevs_list": [ 00:17:23.026 { 00:17:23.026 "name": null, 00:17:23.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.026 "is_configured": false, 00:17:23.026 "data_offset": 0, 00:17:23.026 "data_size": 63488 00:17:23.026 }, 00:17:23.026 { 00:17:23.026 "name": "BaseBdev2", 00:17:23.026 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:23.026 "is_configured": true, 00:17:23.026 "data_offset": 2048, 00:17:23.026 "data_size": 63488 00:17:23.026 } 
00:17:23.026 ] 00:17:23.026 }' 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.026 [2024-11-27 14:16:53.449045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.026 14:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:23.026 [2024-11-27 14:16:53.520539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:23.026 [2024-11-27 14:16:53.523533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.284 [2024-11-27 14:16:53.670206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:23.284 [2024-11-27 14:16:53.671198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:23.543 [2024-11-27 14:16:53.901477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:23.543 [2024-11-27 14:16:53.902047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:24.109 148.00 IOPS, 444.00 MiB/s [2024-11-27T14:16:54.622Z] [2024-11-27 14:16:54.358387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:24.109 [2024-11-27 14:16:54.358998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.110 "name": "raid_bdev1", 00:17:24.110 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:24.110 "strip_size_kb": 0, 00:17:24.110 "state": "online", 00:17:24.110 "raid_level": "raid1", 00:17:24.110 "superblock": true, 00:17:24.110 "num_base_bdevs": 2, 00:17:24.110 "num_base_bdevs_discovered": 2, 
00:17:24.110 "num_base_bdevs_operational": 2, 00:17:24.110 "process": { 00:17:24.110 "type": "rebuild", 00:17:24.110 "target": "spare", 00:17:24.110 "progress": { 00:17:24.110 "blocks": 10240, 00:17:24.110 "percent": 16 00:17:24.110 } 00:17:24.110 }, 00:17:24.110 "base_bdevs_list": [ 00:17:24.110 { 00:17:24.110 "name": "spare", 00:17:24.110 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:24.110 "is_configured": true, 00:17:24.110 "data_offset": 2048, 00:17:24.110 "data_size": 63488 00:17:24.110 }, 00:17:24.110 { 00:17:24.110 "name": "BaseBdev2", 00:17:24.110 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:24.110 "is_configured": true, 00:17:24.110 "data_offset": 2048, 00:17:24.110 "data_size": 63488 00:17:24.110 } 00:17:24.110 ] 00:17:24.110 }' 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.110 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:24.368 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=456 00:17:24.368 
14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.368 [2024-11-27 14:16:54.708879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.368 "name": "raid_bdev1", 00:17:24.368 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:24.368 "strip_size_kb": 0, 00:17:24.368 "state": "online", 00:17:24.368 "raid_level": "raid1", 00:17:24.368 "superblock": true, 00:17:24.368 "num_base_bdevs": 2, 00:17:24.368 "num_base_bdevs_discovered": 2, 00:17:24.368 "num_base_bdevs_operational": 2, 00:17:24.368 "process": { 00:17:24.368 "type": "rebuild", 00:17:24.368 "target": "spare", 00:17:24.368 "progress": { 00:17:24.368 
"blocks": 12288, 00:17:24.368 "percent": 19 00:17:24.368 } 00:17:24.368 }, 00:17:24.368 "base_bdevs_list": [ 00:17:24.368 { 00:17:24.368 "name": "spare", 00:17:24.368 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:24.368 "is_configured": true, 00:17:24.368 "data_offset": 2048, 00:17:24.368 "data_size": 63488 00:17:24.368 }, 00:17:24.368 { 00:17:24.368 "name": "BaseBdev2", 00:17:24.368 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:24.368 "is_configured": true, 00:17:24.368 "data_offset": 2048, 00:17:24.368 "data_size": 63488 00:17:24.368 } 00:17:24.368 ] 00:17:24.368 }' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.368 14:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.368 [2024-11-27 14:16:54.845314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:24.885 132.50 IOPS, 397.50 MiB/s [2024-11-27T14:16:55.398Z] [2024-11-27 14:16:55.177640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:24.885 [2024-11-27 14:16:55.178592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:24.885 [2024-11-27 14:16:55.383982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:24.885 [2024-11-27 14:16:55.384664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 
offset_begin: 18432 offset_end: 24576 00:17:25.452 [2024-11-27 14:16:55.719198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.452 "name": "raid_bdev1", 00:17:25.452 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:25.452 "strip_size_kb": 0, 00:17:25.452 "state": "online", 00:17:25.452 "raid_level": "raid1", 00:17:25.452 "superblock": true, 00:17:25.452 "num_base_bdevs": 2, 00:17:25.452 "num_base_bdevs_discovered": 2, 00:17:25.452 "num_base_bdevs_operational": 2, 00:17:25.452 "process": { 00:17:25.452 "type": "rebuild", 00:17:25.452 "target": 
"spare", 00:17:25.452 "progress": { 00:17:25.452 "blocks": 26624, 00:17:25.452 "percent": 41 00:17:25.452 } 00:17:25.452 }, 00:17:25.452 "base_bdevs_list": [ 00:17:25.452 { 00:17:25.452 "name": "spare", 00:17:25.452 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:25.452 "is_configured": true, 00:17:25.452 "data_offset": 2048, 00:17:25.452 "data_size": 63488 00:17:25.452 }, 00:17:25.452 { 00:17:25.452 "name": "BaseBdev2", 00:17:25.452 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:25.452 "is_configured": true, 00:17:25.452 "data_offset": 2048, 00:17:25.452 "data_size": 63488 00:17:25.452 } 00:17:25.452 ] 00:17:25.452 }' 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.452 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.453 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.453 [2024-11-27 14:16:55.934077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:25.711 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.711 14:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.969 120.00 IOPS, 360.00 MiB/s [2024-11-27T14:16:56.482Z] [2024-11-27 14:16:56.455918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:26.535 [2024-11-27 14:16:56.779154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:26.535 [2024-11-27 14:16:56.779902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.535 14:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.535 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.535 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.535 "name": "raid_bdev1", 00:17:26.535 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:26.535 "strip_size_kb": 0, 00:17:26.535 "state": "online", 00:17:26.535 "raid_level": "raid1", 00:17:26.535 "superblock": true, 00:17:26.535 "num_base_bdevs": 2, 00:17:26.535 "num_base_bdevs_discovered": 2, 00:17:26.535 "num_base_bdevs_operational": 2, 00:17:26.535 "process": { 00:17:26.535 "type": "rebuild", 00:17:26.535 "target": "spare", 00:17:26.535 "progress": { 00:17:26.535 "blocks": 40960, 00:17:26.535 "percent": 64 00:17:26.535 } 00:17:26.535 }, 00:17:26.535 "base_bdevs_list": [ 00:17:26.535 { 00:17:26.535 "name": "spare", 00:17:26.535 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:26.535 "is_configured": 
true, 00:17:26.535 "data_offset": 2048, 00:17:26.535 "data_size": 63488 00:17:26.535 }, 00:17:26.535 { 00:17:26.535 "name": "BaseBdev2", 00:17:26.535 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:26.535 "is_configured": true, 00:17:26.535 "data_offset": 2048, 00:17:26.535 "data_size": 63488 00:17:26.535 } 00:17:26.535 ] 00:17:26.535 }' 00:17:26.535 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.793 107.83 IOPS, 323.50 MiB/s [2024-11-27T14:16:57.306Z] 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.793 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.793 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.793 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.358 [2024-11-27 14:16:57.597290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:27.616 [2024-11-27 14:16:57.930851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:27.874 97.71 IOPS, 293.14 MiB/s [2024-11-27T14:16:58.387Z] 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.874 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.874 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.874 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.874 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.875 "name": "raid_bdev1", 00:17:27.875 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:27.875 "strip_size_kb": 0, 00:17:27.875 "state": "online", 00:17:27.875 "raid_level": "raid1", 00:17:27.875 "superblock": true, 00:17:27.875 "num_base_bdevs": 2, 00:17:27.875 "num_base_bdevs_discovered": 2, 00:17:27.875 "num_base_bdevs_operational": 2, 00:17:27.875 "process": { 00:17:27.875 "type": "rebuild", 00:17:27.875 "target": "spare", 00:17:27.875 "progress": { 00:17:27.875 "blocks": 57344, 00:17:27.875 "percent": 90 00:17:27.875 } 00:17:27.875 }, 00:17:27.875 "base_bdevs_list": [ 00:17:27.875 { 00:17:27.875 "name": "spare", 00:17:27.875 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:27.875 "is_configured": true, 00:17:27.875 "data_offset": 2048, 00:17:27.875 "data_size": 63488 00:17:27.875 }, 00:17:27.875 { 00:17:27.875 "name": "BaseBdev2", 00:17:27.875 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:27.875 "is_configured": true, 00:17:27.875 "data_offset": 2048, 00:17:27.875 "data_size": 63488 00:17:27.875 } 00:17:27.875 ] 00:17:27.875 }' 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.875 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.875 [2024-11-27 14:16:58.383797] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:28.133 [2024-11-27 14:16:58.491349] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:28.133 [2024-11-27 14:16:58.493916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.960 89.88 IOPS, 269.62 MiB/s [2024-11-27T14:16:59.473Z] 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.960 "name": "raid_bdev1", 00:17:28.960 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:28.960 "strip_size_kb": 0, 00:17:28.960 "state": "online", 00:17:28.960 "raid_level": "raid1", 00:17:28.960 "superblock": true, 00:17:28.960 "num_base_bdevs": 2, 00:17:28.960 "num_base_bdevs_discovered": 2, 00:17:28.960 "num_base_bdevs_operational": 2, 00:17:28.960 "base_bdevs_list": [ 00:17:28.960 { 00:17:28.960 "name": "spare", 00:17:28.960 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:28.960 "is_configured": true, 00:17:28.960 "data_offset": 2048, 00:17:28.960 "data_size": 63488 00:17:28.960 }, 00:17:28.960 { 00:17:28.960 "name": "BaseBdev2", 00:17:28.960 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:28.960 "is_configured": true, 00:17:28.960 "data_offset": 2048, 00:17:28.960 "data_size": 63488 00:17:28.960 } 00:17:28.960 ] 00:17:28.960 }' 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:28.960 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.219 "name": "raid_bdev1", 00:17:29.219 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:29.219 "strip_size_kb": 0, 00:17:29.219 "state": "online", 00:17:29.219 "raid_level": "raid1", 00:17:29.219 "superblock": true, 00:17:29.219 "num_base_bdevs": 2, 00:17:29.219 "num_base_bdevs_discovered": 2, 00:17:29.219 "num_base_bdevs_operational": 2, 00:17:29.219 "base_bdevs_list": [ 00:17:29.219 { 00:17:29.219 "name": "spare", 00:17:29.219 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:29.219 "is_configured": true, 00:17:29.219 "data_offset": 2048, 00:17:29.219 "data_size": 63488 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "name": "BaseBdev2", 00:17:29.219 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:29.219 "is_configured": true, 00:17:29.219 "data_offset": 2048, 00:17:29.219 "data_size": 63488 00:17:29.219 } 00:17:29.219 ] 00:17:29.219 }' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.219 "name": "raid_bdev1", 00:17:29.219 
"uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:29.219 "strip_size_kb": 0, 00:17:29.219 "state": "online", 00:17:29.219 "raid_level": "raid1", 00:17:29.219 "superblock": true, 00:17:29.219 "num_base_bdevs": 2, 00:17:29.219 "num_base_bdevs_discovered": 2, 00:17:29.219 "num_base_bdevs_operational": 2, 00:17:29.219 "base_bdevs_list": [ 00:17:29.219 { 00:17:29.219 "name": "spare", 00:17:29.219 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:29.219 "is_configured": true, 00:17:29.219 "data_offset": 2048, 00:17:29.219 "data_size": 63488 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "name": "BaseBdev2", 00:17:29.219 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:29.219 "is_configured": true, 00:17:29.219 "data_offset": 2048, 00:17:29.219 "data_size": 63488 00:17:29.219 } 00:17:29.219 ] 00:17:29.219 }' 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.219 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.787 83.22 IOPS, 249.67 MiB/s [2024-11-27T14:17:00.300Z] 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:29.787 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.787 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.787 [2024-11-27 14:17:00.199246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.787 [2024-11-27 14:17:00.199285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.787 00:17:29.787 Latency(us) 00:17:29.787 [2024-11-27T14:17:00.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.787 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:29.787 raid_bdev1 : 9.26 81.46 244.39 0.00 0.00 16302.24 277.41 112483.61 
00:17:29.787 [2024-11-27T14:17:00.300Z] =================================================================================================================== 00:17:29.787 [2024-11-27T14:17:00.300Z] Total : 81.46 244.39 0.00 0.00 16302.24 277.41 112483.61 00:17:30.046 [2024-11-27 14:17:00.312699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.046 [2024-11-27 14:17:00.312816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.046 [2024-11-27 14:17:00.312954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.046 [2024-11-27 14:17:00.312976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:30.046 { 00:17:30.046 "results": [ 00:17:30.046 { 00:17:30.046 "job": "raid_bdev1", 00:17:30.046 "core_mask": "0x1", 00:17:30.046 "workload": "randrw", 00:17:30.046 "percentage": 50, 00:17:30.046 "status": "finished", 00:17:30.046 "queue_depth": 2, 00:17:30.046 "io_size": 3145728, 00:17:30.046 "runtime": 9.255857, 00:17:30.046 "iops": 81.46193269839843, 00:17:30.046 "mibps": 244.3857980951953, 00:17:30.046 "io_failed": 0, 00:17:30.046 "io_timeout": 0, 00:17:30.046 "avg_latency_us": 16302.241736194841, 00:17:30.046 "min_latency_us": 277.4109090909091, 00:17:30.046 "max_latency_us": 112483.60727272727 00:17:30.046 } 00:17:30.046 ], 00:17:30.046 "core_count": 1 00:17:30.046 } 00:17:30.046 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.046 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.046 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:30.046 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.046 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:30.046 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.047 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:30.324 /dev/nbd0 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.324 1+0 records in 00:17:30.324 1+0 records out 00:17:30.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360876 s, 11.4 MB/s 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:30.324 14:17:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.324 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:30.583 /dev/nbd1 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # 
grep -q -w nbd1 /proc/partitions 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.583 1+0 records in 00:17:30.583 1+0 records out 00:17:30.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567997 s, 7.2 MB/s 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.583 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:30.841 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:30.841 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.841 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:30.841 14:17:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.841 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:30.841 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.841 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:31.101 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.101 14:17:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.360 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.361 [2024-11-27 14:17:01.812015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:31.361 
[2024-11-27 14:17:01.812082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.361 [2024-11-27 14:17:01.812116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:31.361 [2024-11-27 14:17:01.812135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.361 [2024-11-27 14:17:01.815300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.361 [2024-11-27 14:17:01.815380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:31.361 [2024-11-27 14:17:01.815494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:31.361 [2024-11-27 14:17:01.815563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.361 [2024-11-27 14:17:01.815759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.361 spare 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.361 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.619 [2024-11-27 14:17:01.915946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:31.619 [2024-11-27 14:17:01.916031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:31.619 [2024-11-27 14:17:01.916519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:17:31.619 [2024-11-27 14:17:01.916840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:31.619 [2024-11-27 14:17:01.916863] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:31.619 [2024-11-27 14:17:01.917439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.619 "name": "raid_bdev1", 00:17:31.619 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:31.619 "strip_size_kb": 0, 00:17:31.619 "state": "online", 00:17:31.619 "raid_level": "raid1", 00:17:31.619 "superblock": true, 00:17:31.619 "num_base_bdevs": 2, 00:17:31.619 "num_base_bdevs_discovered": 2, 00:17:31.619 "num_base_bdevs_operational": 2, 00:17:31.619 "base_bdevs_list": [ 00:17:31.619 { 00:17:31.619 "name": "spare", 00:17:31.619 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:31.619 "is_configured": true, 00:17:31.619 "data_offset": 2048, 00:17:31.619 "data_size": 63488 00:17:31.619 }, 00:17:31.619 { 00:17:31.619 "name": "BaseBdev2", 00:17:31.619 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:31.619 "is_configured": true, 00:17:31.619 "data_offset": 2048, 00:17:31.619 "data_size": 63488 00:17:31.619 } 00:17:31.619 ] 00:17:31.619 }' 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.619 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.187 "name": "raid_bdev1", 00:17:32.187 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:32.187 "strip_size_kb": 0, 00:17:32.187 "state": "online", 00:17:32.187 "raid_level": "raid1", 00:17:32.187 "superblock": true, 00:17:32.187 "num_base_bdevs": 2, 00:17:32.187 "num_base_bdevs_discovered": 2, 00:17:32.187 "num_base_bdevs_operational": 2, 00:17:32.187 "base_bdevs_list": [ 00:17:32.187 { 00:17:32.187 "name": "spare", 00:17:32.187 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:32.187 "is_configured": true, 00:17:32.187 "data_offset": 2048, 00:17:32.187 "data_size": 63488 00:17:32.187 }, 00:17:32.187 { 00:17:32.187 "name": "BaseBdev2", 00:17:32.187 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:32.187 "is_configured": true, 00:17:32.187 "data_offset": 2048, 00:17:32.187 "data_size": 63488 00:17:32.187 } 00:17:32.187 ] 00:17:32.187 }' 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.187 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.446 [2024-11-27 14:17:02.705411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.446 "name": "raid_bdev1", 00:17:32.446 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:32.446 "strip_size_kb": 0, 00:17:32.446 "state": "online", 00:17:32.446 "raid_level": "raid1", 00:17:32.446 "superblock": true, 00:17:32.446 "num_base_bdevs": 2, 00:17:32.446 "num_base_bdevs_discovered": 1, 00:17:32.446 "num_base_bdevs_operational": 1, 00:17:32.446 "base_bdevs_list": [ 00:17:32.446 { 00:17:32.446 "name": null, 00:17:32.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.446 "is_configured": false, 00:17:32.446 "data_offset": 0, 00:17:32.446 "data_size": 63488 00:17:32.446 }, 00:17:32.446 { 00:17:32.446 "name": "BaseBdev2", 00:17:32.446 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:32.446 "is_configured": true, 00:17:32.446 "data_offset": 2048, 00:17:32.446 "data_size": 63488 00:17:32.446 } 00:17:32.446 ] 00:17:32.446 }' 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.446 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.705 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.705 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.705 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.705 [2024-11-27 14:17:03.193674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.705 [2024-11-27 14:17:03.193948] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.705 [2024-11-27 14:17:03.193971] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:32.705 [2024-11-27 14:17:03.194022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.705 [2024-11-27 14:17:03.210721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:17:32.705 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.705 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:32.705 [2024-11-27 14:17:03.213217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.082 
14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.082 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.082 "name": "raid_bdev1", 00:17:34.082 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:34.082 "strip_size_kb": 0, 00:17:34.082 "state": "online", 00:17:34.083 "raid_level": "raid1", 00:17:34.083 "superblock": true, 00:17:34.083 "num_base_bdevs": 2, 00:17:34.083 "num_base_bdevs_discovered": 2, 00:17:34.083 "num_base_bdevs_operational": 2, 00:17:34.083 "process": { 00:17:34.083 "type": "rebuild", 00:17:34.083 "target": "spare", 00:17:34.083 "progress": { 00:17:34.083 "blocks": 20480, 00:17:34.083 "percent": 32 00:17:34.083 } 00:17:34.083 }, 00:17:34.083 "base_bdevs_list": [ 00:17:34.083 { 00:17:34.083 "name": "spare", 00:17:34.083 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:34.083 "is_configured": true, 00:17:34.083 "data_offset": 2048, 00:17:34.083 "data_size": 63488 00:17:34.083 }, 00:17:34.083 { 00:17:34.083 "name": "BaseBdev2", 00:17:34.083 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:34.083 "is_configured": true, 00:17:34.083 "data_offset": 2048, 00:17:34.083 "data_size": 63488 00:17:34.083 } 00:17:34.083 ] 00:17:34.083 }' 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.083 14:17:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.083 [2024-11-27 14:17:04.378707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.083 [2024-11-27 14:17:04.422644] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:34.083 [2024-11-27 14:17:04.422888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.083 [2024-11-27 14:17:04.423119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.083 [2024-11-27 14:17:04.423172] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.083 14:17:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.083 "name": "raid_bdev1", 00:17:34.083 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:34.083 "strip_size_kb": 0, 00:17:34.083 "state": "online", 00:17:34.083 "raid_level": "raid1", 00:17:34.083 "superblock": true, 00:17:34.083 "num_base_bdevs": 2, 00:17:34.083 "num_base_bdevs_discovered": 1, 00:17:34.083 "num_base_bdevs_operational": 1, 00:17:34.083 "base_bdevs_list": [ 00:17:34.083 { 00:17:34.083 "name": null, 00:17:34.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.083 "is_configured": false, 00:17:34.083 "data_offset": 0, 00:17:34.083 "data_size": 63488 00:17:34.083 }, 00:17:34.083 { 00:17:34.083 "name": "BaseBdev2", 00:17:34.083 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:34.083 "is_configured": true, 00:17:34.083 "data_offset": 2048, 00:17:34.083 "data_size": 63488 00:17:34.083 } 00:17:34.083 ] 00:17:34.083 }' 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.083 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.650 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.650 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.650 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.650 [2024-11-27 14:17:04.978445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.650 [2024-11-27 14:17:04.978556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.650 [2024-11-27 14:17:04.978591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:34.650 [2024-11-27 14:17:04.978605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.650 [2024-11-27 14:17:04.979357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.650 [2024-11-27 14:17:04.979388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.650 [2024-11-27 14:17:04.979539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.650 [2024-11-27 14:17:04.979589] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:34.650 [2024-11-27 14:17:04.979620] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:34.650 [2024-11-27 14:17:04.979648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.650 [2024-11-27 14:17:04.997367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:17:34.650 spare 00:17:34.650 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.650 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:34.650 [2024-11-27 14:17:05.000032] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.586 "name": "raid_bdev1", 00:17:35.586 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:35.586 "strip_size_kb": 0, 00:17:35.586 
"state": "online", 00:17:35.586 "raid_level": "raid1", 00:17:35.586 "superblock": true, 00:17:35.586 "num_base_bdevs": 2, 00:17:35.586 "num_base_bdevs_discovered": 2, 00:17:35.586 "num_base_bdevs_operational": 2, 00:17:35.586 "process": { 00:17:35.586 "type": "rebuild", 00:17:35.586 "target": "spare", 00:17:35.586 "progress": { 00:17:35.586 "blocks": 20480, 00:17:35.586 "percent": 32 00:17:35.586 } 00:17:35.586 }, 00:17:35.586 "base_bdevs_list": [ 00:17:35.586 { 00:17:35.586 "name": "spare", 00:17:35.586 "uuid": "4330efc5-ddeb-59f5-8af8-70a0a343a840", 00:17:35.586 "is_configured": true, 00:17:35.586 "data_offset": 2048, 00:17:35.586 "data_size": 63488 00:17:35.586 }, 00:17:35.586 { 00:17:35.586 "name": "BaseBdev2", 00:17:35.586 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:35.586 "is_configured": true, 00:17:35.586 "data_offset": 2048, 00:17:35.586 "data_size": 63488 00:17:35.586 } 00:17:35.586 ] 00:17:35.586 }' 00:17:35.586 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.921 [2024-11-27 14:17:06.169912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.921 [2024-11-27 14:17:06.209801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:35.921 [2024-11-27 14:17:06.209933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.921 [2024-11-27 14:17:06.209973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.921 [2024-11-27 14:17:06.209993] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.921 14:17:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.921 "name": "raid_bdev1", 00:17:35.921 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:35.921 "strip_size_kb": 0, 00:17:35.921 "state": "online", 00:17:35.921 "raid_level": "raid1", 00:17:35.921 "superblock": true, 00:17:35.921 "num_base_bdevs": 2, 00:17:35.921 "num_base_bdevs_discovered": 1, 00:17:35.921 "num_base_bdevs_operational": 1, 00:17:35.921 "base_bdevs_list": [ 00:17:35.921 { 00:17:35.921 "name": null, 00:17:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.921 "is_configured": false, 00:17:35.921 "data_offset": 0, 00:17:35.921 "data_size": 63488 00:17:35.921 }, 00:17:35.921 { 00:17:35.921 "name": "BaseBdev2", 00:17:35.921 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:35.921 "is_configured": true, 00:17:35.921 "data_offset": 2048, 00:17:35.921 "data_size": 63488 00:17:35.921 } 00:17:35.921 ] 00:17:35.921 }' 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.921 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.488 "name": "raid_bdev1", 00:17:36.488 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:36.488 "strip_size_kb": 0, 00:17:36.488 "state": "online", 00:17:36.488 "raid_level": "raid1", 00:17:36.488 "superblock": true, 00:17:36.488 "num_base_bdevs": 2, 00:17:36.488 "num_base_bdevs_discovered": 1, 00:17:36.488 "num_base_bdevs_operational": 1, 00:17:36.488 "base_bdevs_list": [ 00:17:36.488 { 00:17:36.488 "name": null, 00:17:36.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.488 "is_configured": false, 00:17:36.488 "data_offset": 0, 00:17:36.488 "data_size": 63488 00:17:36.488 }, 00:17:36.488 { 00:17:36.488 "name": "BaseBdev2", 00:17:36.488 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:36.488 "is_configured": true, 00:17:36.488 "data_offset": 2048, 00:17:36.488 "data_size": 63488 00:17:36.488 } 00:17:36.488 ] 00:17:36.488 }' 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.488 [2024-11-27 14:17:06.946615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.488 [2024-11-27 14:17:06.946826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.488 [2024-11-27 14:17:06.946874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:36.488 [2024-11-27 14:17:06.946897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.488 [2024-11-27 14:17:06.947450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.488 [2024-11-27 14:17:06.947482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.488 [2024-11-27 14:17:06.947580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:36.488 [2024-11-27 14:17:06.947609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.488 [2024-11-27 14:17:06.947624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:36.488 [2024-11-27 14:17:06.947639] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:36.488 BaseBdev1 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.488 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.864 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.864 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.864 "name": "raid_bdev1", 00:17:37.864 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:37.864 "strip_size_kb": 0, 00:17:37.864 "state": "online", 00:17:37.864 "raid_level": "raid1", 00:17:37.864 "superblock": true, 00:17:37.864 "num_base_bdevs": 2, 00:17:37.864 "num_base_bdevs_discovered": 1, 00:17:37.864 "num_base_bdevs_operational": 1, 00:17:37.864 "base_bdevs_list": [ 00:17:37.864 { 00:17:37.864 "name": null, 00:17:37.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.864 "is_configured": false, 00:17:37.864 "data_offset": 0, 00:17:37.864 "data_size": 63488 00:17:37.864 }, 00:17:37.864 { 00:17:37.864 "name": "BaseBdev2", 00:17:37.864 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:37.864 "is_configured": true, 00:17:37.864 "data_offset": 2048, 00:17:37.864 "data_size": 63488 00:17:37.864 } 00:17:37.864 ] 00:17:37.864 }' 00:17:37.864 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.864 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.123 "name": "raid_bdev1", 00:17:38.123 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:38.123 "strip_size_kb": 0, 00:17:38.123 "state": "online", 00:17:38.123 "raid_level": "raid1", 00:17:38.123 "superblock": true, 00:17:38.123 "num_base_bdevs": 2, 00:17:38.123 "num_base_bdevs_discovered": 1, 00:17:38.123 "num_base_bdevs_operational": 1, 00:17:38.123 "base_bdevs_list": [ 00:17:38.123 { 00:17:38.123 "name": null, 00:17:38.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.123 "is_configured": false, 00:17:38.123 "data_offset": 0, 00:17:38.123 "data_size": 63488 00:17:38.123 }, 00:17:38.123 { 00:17:38.123 "name": "BaseBdev2", 00:17:38.123 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:38.123 "is_configured": true, 00:17:38.123 "data_offset": 2048, 00:17:38.123 "data_size": 63488 00:17:38.123 } 00:17:38.123 ] 00:17:38.123 }' 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.123 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.123 [2024-11-27 14:17:08.627570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.123 [2024-11-27 14:17:08.627806] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.123 [2024-11-27 14:17:08.627825] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.382 request: 00:17:38.382 { 00:17:38.382 "base_bdev": "BaseBdev1", 00:17:38.382 "raid_bdev": "raid_bdev1", 00:17:38.382 "method": "bdev_raid_add_base_bdev", 00:17:38.383 "req_id": 1 00:17:38.383 } 00:17:38.383 Got JSON-RPC error response 00:17:38.383 response: 00:17:38.383 { 00:17:38.383 "code": -22, 00:17:38.383 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:38.383 } 00:17:38.383 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:38.383 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:38.383 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.383 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.383 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.383 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.315 14:17:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.315 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.315 "name": "raid_bdev1", 00:17:39.315 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:39.315 "strip_size_kb": 0, 00:17:39.315 "state": "online", 00:17:39.315 "raid_level": "raid1", 00:17:39.315 "superblock": true, 00:17:39.315 "num_base_bdevs": 2, 00:17:39.315 "num_base_bdevs_discovered": 1, 00:17:39.315 "num_base_bdevs_operational": 1, 00:17:39.315 "base_bdevs_list": [ 00:17:39.315 { 00:17:39.315 "name": null, 00:17:39.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.316 "is_configured": false, 00:17:39.316 "data_offset": 0, 00:17:39.316 "data_size": 63488 00:17:39.316 }, 00:17:39.316 { 00:17:39.316 "name": "BaseBdev2", 00:17:39.316 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:39.316 "is_configured": true, 00:17:39.316 "data_offset": 2048, 00:17:39.316 "data_size": 63488 00:17:39.316 } 00:17:39.316 ] 00:17:39.316 }' 00:17:39.316 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.316 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.880 14:17:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.880 "name": "raid_bdev1", 00:17:39.880 "uuid": "c78dfae5-9663-4bcc-9e28-a06bd888546e", 00:17:39.880 "strip_size_kb": 0, 00:17:39.880 "state": "online", 00:17:39.880 "raid_level": "raid1", 00:17:39.880 "superblock": true, 00:17:39.880 "num_base_bdevs": 2, 00:17:39.880 "num_base_bdevs_discovered": 1, 00:17:39.880 "num_base_bdevs_operational": 1, 00:17:39.880 "base_bdevs_list": [ 00:17:39.880 { 00:17:39.880 "name": null, 00:17:39.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.880 "is_configured": false, 00:17:39.880 "data_offset": 0, 00:17:39.880 "data_size": 63488 00:17:39.880 }, 00:17:39.880 { 00:17:39.880 "name": "BaseBdev2", 00:17:39.880 "uuid": "2a689463-a5c3-5c39-ac02-4c87aa26251b", 00:17:39.880 "is_configured": true, 00:17:39.880 "data_offset": 2048, 00:17:39.880 "data_size": 63488 00:17:39.880 } 00:17:39.880 ] 00:17:39.880 }' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.880 14:17:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77284 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77284 ']' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77284 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77284 00:17:39.880 killing process with pid 77284 00:17:39.880 Received shutdown signal, test time was about 19.299501 seconds 00:17:39.880 00:17:39.880 Latency(us) 00:17:39.880 [2024-11-27T14:17:10.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.880 [2024-11-27T14:17:10.393Z] =================================================================================================================== 00:17:39.880 [2024-11-27T14:17:10.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77284' 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77284 00:17:39.880 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77284 00:17:39.880 [2024-11-27 14:17:10.334813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.880 [2024-11-27 14:17:10.334994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.880 [2024-11-27 14:17:10.335075] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.880 [2024-11-27 14:17:10.335091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:40.138 [2024-11-27 14:17:10.543559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:41.512 00:17:41.512 real 0m22.626s 00:17:41.512 user 0m30.494s 00:17:41.512 sys 0m2.028s 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.512 ************************************ 00:17:41.512 END TEST raid_rebuild_test_sb_io 00:17:41.512 ************************************ 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.512 14:17:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:41.512 14:17:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:17:41.512 14:17:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:41.512 14:17:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.512 14:17:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.512 ************************************ 00:17:41.512 START TEST raid_rebuild_test 00:17:41.512 ************************************ 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:41.512 14:17:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.512 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78003 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78003 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78003 ']' 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.513 14:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.513 [2024-11-27 14:17:11.805979] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:17:41.513 [2024-11-27 14:17:11.806339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78003 ] 00:17:41.513 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:41.513 Zero copy mechanism will not be used. 00:17:41.513 [2024-11-27 14:17:11.979496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.771 [2024-11-27 14:17:12.112404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.029 [2024-11-27 14:17:12.321825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.029 [2024-11-27 14:17:12.322175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 BaseBdev1_malloc 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 [2024-11-27 14:17:12.907723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:42.596 [2024-11-27 14:17:12.907809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.596 [2024-11-27 14:17:12.907858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:42.596 [2024-11-27 14:17:12.907881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.596 [2024-11-27 14:17:12.910735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.596 [2024-11-27 14:17:12.910796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.596 BaseBdev1 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 BaseBdev2_malloc 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 [2024-11-27 14:17:12.960418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:42.596 [2024-11-27 14:17:12.960504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.596 [2024-11-27 14:17:12.960535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:42.596 [2024-11-27 14:17:12.960551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.596 [2024-11-27 14:17:12.963434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.596 [2024-11-27 14:17:12.963642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.596 BaseBdev2 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 BaseBdev3_malloc 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 [2024-11-27 14:17:13.022162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 
00:17:42.596 [2024-11-27 14:17:13.022231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.596 [2024-11-27 14:17:13.022261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:42.596 [2024-11-27 14:17:13.022279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.596 [2024-11-27 14:17:13.025080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.596 [2024-11-27 14:17:13.025280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:42.596 BaseBdev3 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 BaseBdev4_malloc 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.596 [2024-11-27 14:17:13.078961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:42.596 [2024-11-27 14:17:13.079047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.596 [2024-11-27 14:17:13.079076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000009680 00:17:42.596 [2024-11-27 14:17:13.079093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.596 [2024-11-27 14:17:13.081769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.596 [2024-11-27 14:17:13.081989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:42.596 BaseBdev4 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.596 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.855 spare_malloc 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.855 spare_delay 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.855 [2024-11-27 14:17:13.140317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.855 [2024-11-27 14:17:13.140409] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:42.855 [2024-11-27 14:17:13.140434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:42.855 [2024-11-27 14:17:13.140450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.855 [2024-11-27 14:17:13.143308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.855 [2024-11-27 14:17:13.143372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.855 spare 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.855 [2024-11-27 14:17:13.152354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.855 [2024-11-27 14:17:13.154783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.855 [2024-11-27 14:17:13.154872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.855 [2024-11-27 14:17:13.154959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.855 [2024-11-27 14:17:13.155098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:42.855 [2024-11-27 14:17:13.155121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:42.855 [2024-11-27 14:17:13.155484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:42.855 [2024-11-27 14:17:13.155674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000007780 00:17:42.855 [2024-11-27 14:17:13.155692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.855 [2024-11-27 14:17:13.155872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:42.855 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.855 "name": "raid_bdev1", 00:17:42.855 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:42.855 "strip_size_kb": 0, 00:17:42.855 "state": "online", 00:17:42.855 "raid_level": "raid1", 00:17:42.855 "superblock": false, 00:17:42.855 "num_base_bdevs": 4, 00:17:42.855 "num_base_bdevs_discovered": 4, 00:17:42.855 "num_base_bdevs_operational": 4, 00:17:42.855 "base_bdevs_list": [ 00:17:42.855 { 00:17:42.855 "name": "BaseBdev1", 00:17:42.855 "uuid": "c35e88bd-10cf-5109-a480-50850d41234a", 00:17:42.855 "is_configured": true, 00:17:42.855 "data_offset": 0, 00:17:42.855 "data_size": 65536 00:17:42.855 }, 00:17:42.855 { 00:17:42.855 "name": "BaseBdev2", 00:17:42.855 "uuid": "7f974015-f769-5f6b-9090-ae5c162eb025", 00:17:42.855 "is_configured": true, 00:17:42.855 "data_offset": 0, 00:17:42.855 "data_size": 65536 00:17:42.855 }, 00:17:42.855 { 00:17:42.855 "name": "BaseBdev3", 00:17:42.855 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:42.855 "is_configured": true, 00:17:42.855 "data_offset": 0, 00:17:42.855 "data_size": 65536 00:17:42.855 }, 00:17:42.855 { 00:17:42.855 "name": "BaseBdev4", 00:17:42.855 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:42.855 "is_configured": true, 00:17:42.855 "data_offset": 0, 00:17:42.855 "data_size": 65536 00:17:42.856 } 00:17:42.856 ] 00:17:42.856 }' 00:17:42.856 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.856 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.422 [2024-11-27 14:17:13.660985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@12 -- # local i 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:43.422 14:17:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:43.681 [2024-11-27 14:17:14.024714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:43.681 /dev/nbd0 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.681 1+0 records in 00:17:43.681 1+0 records out 00:17:43.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037628 s, 10.9 MB/s 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:43.681 14:17:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:53.657 65536+0 records in 00:17:53.657 65536+0 records out 00:17:53.657 33554432 bytes (34 MB, 32 MiB) copied, 8.42627 s, 4.0 MB/s 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:53.657 [2024-11-27 14:17:22.753561] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.657 [2024-11-27 14:17:22.785634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.657 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.658 "name": "raid_bdev1", 00:17:53.658 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:53.658 "strip_size_kb": 0, 00:17:53.658 "state": "online", 00:17:53.658 "raid_level": "raid1", 00:17:53.658 "superblock": false, 00:17:53.658 "num_base_bdevs": 4, 00:17:53.658 "num_base_bdevs_discovered": 3, 00:17:53.658 "num_base_bdevs_operational": 3, 00:17:53.658 "base_bdevs_list": [ 00:17:53.658 { 00:17:53.658 "name": null, 00:17:53.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.658 "is_configured": false, 00:17:53.658 "data_offset": 0, 00:17:53.658 "data_size": 65536 00:17:53.658 }, 00:17:53.658 { 00:17:53.658 "name": "BaseBdev2", 00:17:53.658 "uuid": "7f974015-f769-5f6b-9090-ae5c162eb025", 00:17:53.658 "is_configured": true, 00:17:53.658 "data_offset": 0, 00:17:53.658 "data_size": 65536 00:17:53.658 }, 00:17:53.658 { 00:17:53.658 "name": "BaseBdev3", 00:17:53.658 
"uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:53.658 "is_configured": true, 00:17:53.658 "data_offset": 0, 00:17:53.658 "data_size": 65536 00:17:53.658 }, 00:17:53.658 { 00:17:53.658 "name": "BaseBdev4", 00:17:53.658 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:53.658 "is_configured": true, 00:17:53.658 "data_offset": 0, 00:17:53.658 "data_size": 65536 00:17:53.658 } 00:17:53.658 ] 00:17:53.658 }' 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.658 14:17:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 14:17:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.658 14:17:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.658 14:17:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 [2024-11-27 14:17:23.293808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.658 [2024-11-27 14:17:23.308463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:17:53.658 14:17:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.658 14:17:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:53.658 [2024-11-27 14:17:23.311059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.916 14:17:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.916 "name": "raid_bdev1", 00:17:53.916 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:53.916 "strip_size_kb": 0, 00:17:53.916 "state": "online", 00:17:53.916 "raid_level": "raid1", 00:17:53.916 "superblock": false, 00:17:53.916 "num_base_bdevs": 4, 00:17:53.916 "num_base_bdevs_discovered": 4, 00:17:53.916 "num_base_bdevs_operational": 4, 00:17:53.916 "process": { 00:17:53.916 "type": "rebuild", 00:17:53.916 "target": "spare", 00:17:53.916 "progress": { 00:17:53.916 "blocks": 20480, 00:17:53.916 "percent": 31 00:17:53.916 } 00:17:53.916 }, 00:17:53.916 "base_bdevs_list": [ 00:17:53.916 { 00:17:53.916 "name": "spare", 00:17:53.916 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:53.916 "is_configured": true, 00:17:53.916 "data_offset": 0, 00:17:53.916 "data_size": 65536 00:17:53.916 }, 00:17:53.916 { 00:17:53.916 "name": "BaseBdev2", 00:17:53.916 "uuid": "7f974015-f769-5f6b-9090-ae5c162eb025", 00:17:53.916 "is_configured": true, 00:17:53.916 "data_offset": 0, 00:17:53.916 "data_size": 65536 00:17:53.916 }, 00:17:53.916 { 00:17:53.916 "name": "BaseBdev3", 00:17:53.916 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:53.916 "is_configured": true, 00:17:53.916 "data_offset": 0, 00:17:53.916 "data_size": 65536 00:17:53.916 }, 00:17:53.916 { 
00:17:53.916 "name": "BaseBdev4", 00:17:53.916 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:53.916 "is_configured": true, 00:17:53.916 "data_offset": 0, 00:17:53.916 "data_size": 65536 00:17:53.916 } 00:17:53.916 ] 00:17:53.916 }' 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.916 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.174 [2024-11-27 14:17:24.476321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.174 [2024-11-27 14:17:24.520091] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.174 [2024-11-27 14:17:24.520173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.174 [2024-11-27 14:17:24.520198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.174 [2024-11-27 14:17:24.520212] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.174 14:17:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.174 "name": "raid_bdev1", 00:17:54.174 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:54.174 "strip_size_kb": 0, 00:17:54.174 "state": "online", 00:17:54.174 "raid_level": "raid1", 00:17:54.174 "superblock": false, 00:17:54.174 "num_base_bdevs": 4, 00:17:54.174 "num_base_bdevs_discovered": 3, 00:17:54.174 "num_base_bdevs_operational": 3, 00:17:54.174 "base_bdevs_list": [ 00:17:54.174 { 00:17:54.174 "name": null, 00:17:54.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.174 "is_configured": false, 00:17:54.174 "data_offset": 0, 
00:17:54.174 "data_size": 65536 00:17:54.174 }, 00:17:54.174 { 00:17:54.174 "name": "BaseBdev2", 00:17:54.174 "uuid": "7f974015-f769-5f6b-9090-ae5c162eb025", 00:17:54.174 "is_configured": true, 00:17:54.174 "data_offset": 0, 00:17:54.174 "data_size": 65536 00:17:54.174 }, 00:17:54.174 { 00:17:54.174 "name": "BaseBdev3", 00:17:54.174 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:54.174 "is_configured": true, 00:17:54.174 "data_offset": 0, 00:17:54.174 "data_size": 65536 00:17:54.174 }, 00:17:54.174 { 00:17:54.174 "name": "BaseBdev4", 00:17:54.174 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:54.174 "is_configured": true, 00:17:54.174 "data_offset": 0, 00:17:54.174 "data_size": 65536 00:17:54.174 } 00:17:54.174 ] 00:17:54.174 }' 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.174 14:17:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.740 "name": "raid_bdev1", 00:17:54.740 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:54.740 "strip_size_kb": 0, 00:17:54.740 "state": "online", 00:17:54.740 "raid_level": "raid1", 00:17:54.740 "superblock": false, 00:17:54.740 "num_base_bdevs": 4, 00:17:54.740 "num_base_bdevs_discovered": 3, 00:17:54.740 "num_base_bdevs_operational": 3, 00:17:54.740 "base_bdevs_list": [ 00:17:54.740 { 00:17:54.740 "name": null, 00:17:54.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.740 "is_configured": false, 00:17:54.740 "data_offset": 0, 00:17:54.740 "data_size": 65536 00:17:54.740 }, 00:17:54.740 { 00:17:54.740 "name": "BaseBdev2", 00:17:54.740 "uuid": "7f974015-f769-5f6b-9090-ae5c162eb025", 00:17:54.740 "is_configured": true, 00:17:54.740 "data_offset": 0, 00:17:54.740 "data_size": 65536 00:17:54.740 }, 00:17:54.740 { 00:17:54.740 "name": "BaseBdev3", 00:17:54.740 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:54.740 "is_configured": true, 00:17:54.740 "data_offset": 0, 00:17:54.740 "data_size": 65536 00:17:54.740 }, 00:17:54.740 { 00:17:54.740 "name": "BaseBdev4", 00:17:54.740 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:54.740 "is_configured": true, 00:17:54.740 "data_offset": 0, 00:17:54.740 "data_size": 65536 00:17:54.740 } 00:17:54.740 ] 00:17:54.740 }' 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.740 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.740 [2024-11-27 14:17:25.248535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.998 [2024-11-27 14:17:25.262719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:54.998 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.998 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:54.998 [2024-11-27 14:17:25.265478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.932 "name": "raid_bdev1", 
00:17:55.932 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:55.932 "strip_size_kb": 0, 00:17:55.932 "state": "online", 00:17:55.932 "raid_level": "raid1", 00:17:55.932 "superblock": false, 00:17:55.932 "num_base_bdevs": 4, 00:17:55.932 "num_base_bdevs_discovered": 4, 00:17:55.932 "num_base_bdevs_operational": 4, 00:17:55.932 "process": { 00:17:55.932 "type": "rebuild", 00:17:55.932 "target": "spare", 00:17:55.932 "progress": { 00:17:55.932 "blocks": 20480, 00:17:55.932 "percent": 31 00:17:55.932 } 00:17:55.932 }, 00:17:55.932 "base_bdevs_list": [ 00:17:55.932 { 00:17:55.932 "name": "spare", 00:17:55.932 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:55.932 "is_configured": true, 00:17:55.932 "data_offset": 0, 00:17:55.932 "data_size": 65536 00:17:55.932 }, 00:17:55.932 { 00:17:55.932 "name": "BaseBdev2", 00:17:55.932 "uuid": "7f974015-f769-5f6b-9090-ae5c162eb025", 00:17:55.932 "is_configured": true, 00:17:55.932 "data_offset": 0, 00:17:55.932 "data_size": 65536 00:17:55.932 }, 00:17:55.932 { 00:17:55.932 "name": "BaseBdev3", 00:17:55.932 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:55.932 "is_configured": true, 00:17:55.932 "data_offset": 0, 00:17:55.932 "data_size": 65536 00:17:55.932 }, 00:17:55.932 { 00:17:55.932 "name": "BaseBdev4", 00:17:55.932 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:55.932 "is_configured": true, 00:17:55.932 "data_offset": 0, 00:17:55.932 "data_size": 65536 00:17:55.932 } 00:17:55.932 ] 00:17:55.932 }' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.932 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.932 [2024-11-27 14:17:26.431041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.190 [2024-11-27 14:17:26.475000] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.190 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.190 "name": "raid_bdev1", 00:17:56.190 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:56.190 "strip_size_kb": 0, 00:17:56.190 "state": "online", 00:17:56.190 "raid_level": "raid1", 00:17:56.190 "superblock": false, 00:17:56.190 "num_base_bdevs": 4, 00:17:56.190 "num_base_bdevs_discovered": 3, 00:17:56.190 "num_base_bdevs_operational": 3, 00:17:56.190 "process": { 00:17:56.190 "type": "rebuild", 00:17:56.190 "target": "spare", 00:17:56.190 "progress": { 00:17:56.190 "blocks": 24576, 00:17:56.190 "percent": 37 00:17:56.190 } 00:17:56.190 }, 00:17:56.190 "base_bdevs_list": [ 00:17:56.190 { 00:17:56.190 "name": "spare", 00:17:56.190 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:56.190 "is_configured": true, 00:17:56.190 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 }, 00:17:56.191 { 00:17:56.191 "name": null, 00:17:56.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.191 "is_configured": false, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 }, 00:17:56.191 { 00:17:56.191 "name": "BaseBdev3", 00:17:56.191 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:56.191 "is_configured": true, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 }, 00:17:56.191 { 00:17:56.191 "name": "BaseBdev4", 00:17:56.191 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:56.191 "is_configured": true, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 } 00:17:56.191 ] 00:17:56.191 }' 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.191 "name": "raid_bdev1", 00:17:56.191 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:56.191 "strip_size_kb": 0, 00:17:56.191 "state": "online", 00:17:56.191 "raid_level": "raid1", 00:17:56.191 "superblock": 
false, 00:17:56.191 "num_base_bdevs": 4, 00:17:56.191 "num_base_bdevs_discovered": 3, 00:17:56.191 "num_base_bdevs_operational": 3, 00:17:56.191 "process": { 00:17:56.191 "type": "rebuild", 00:17:56.191 "target": "spare", 00:17:56.191 "progress": { 00:17:56.191 "blocks": 26624, 00:17:56.191 "percent": 40 00:17:56.191 } 00:17:56.191 }, 00:17:56.191 "base_bdevs_list": [ 00:17:56.191 { 00:17:56.191 "name": "spare", 00:17:56.191 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:56.191 "is_configured": true, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 }, 00:17:56.191 { 00:17:56.191 "name": null, 00:17:56.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.191 "is_configured": false, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 }, 00:17:56.191 { 00:17:56.191 "name": "BaseBdev3", 00:17:56.191 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:56.191 "is_configured": true, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 }, 00:17:56.191 { 00:17:56.191 "name": "BaseBdev4", 00:17:56.191 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:56.191 "is_configured": true, 00:17:56.191 "data_offset": 0, 00:17:56.191 "data_size": 65536 00:17:56.191 } 00:17:56.191 ] 00:17:56.191 }' 00:17:56.191 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.449 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.449 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.449 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.449 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.385 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.385 "name": "raid_bdev1", 00:17:57.385 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:57.385 "strip_size_kb": 0, 00:17:57.385 "state": "online", 00:17:57.385 "raid_level": "raid1", 00:17:57.385 "superblock": false, 00:17:57.385 "num_base_bdevs": 4, 00:17:57.385 "num_base_bdevs_discovered": 3, 00:17:57.385 "num_base_bdevs_operational": 3, 00:17:57.385 "process": { 00:17:57.385 "type": "rebuild", 00:17:57.385 "target": "spare", 00:17:57.385 "progress": { 00:17:57.385 "blocks": 51200, 00:17:57.385 "percent": 78 00:17:57.385 } 00:17:57.385 }, 00:17:57.385 "base_bdevs_list": [ 00:17:57.385 { 00:17:57.385 "name": "spare", 00:17:57.385 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:57.386 "is_configured": true, 00:17:57.386 "data_offset": 0, 00:17:57.386 "data_size": 65536 00:17:57.386 }, 00:17:57.386 { 00:17:57.386 "name": null, 00:17:57.386 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.386 "is_configured": false, 00:17:57.386 "data_offset": 0, 00:17:57.386 "data_size": 65536 00:17:57.386 }, 00:17:57.386 { 00:17:57.386 "name": "BaseBdev3", 00:17:57.386 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:57.386 "is_configured": true, 00:17:57.386 "data_offset": 0, 00:17:57.386 "data_size": 65536 00:17:57.386 }, 00:17:57.386 { 00:17:57.386 "name": "BaseBdev4", 00:17:57.386 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:57.386 "is_configured": true, 00:17:57.386 "data_offset": 0, 00:17:57.386 "data_size": 65536 00:17:57.386 } 00:17:57.386 ] 00:17:57.386 }' 00:17:57.386 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.644 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.644 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.644 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.644 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.211 [2024-11-27 14:17:28.490717] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:58.211 [2024-11-27 14:17:28.490819] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:58.211 [2024-11-27 14:17:28.490905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.470 14:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.729 14:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.729 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.729 "name": "raid_bdev1", 00:17:58.729 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:58.729 "strip_size_kb": 0, 00:17:58.729 "state": "online", 00:17:58.729 "raid_level": "raid1", 00:17:58.729 "superblock": false, 00:17:58.729 "num_base_bdevs": 4, 00:17:58.729 "num_base_bdevs_discovered": 3, 00:17:58.729 "num_base_bdevs_operational": 3, 00:17:58.729 "base_bdevs_list": [ 00:17:58.729 { 00:17:58.729 "name": "spare", 00:17:58.729 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:58.729 "is_configured": true, 00:17:58.729 "data_offset": 0, 00:17:58.729 "data_size": 65536 00:17:58.729 }, 00:17:58.729 { 00:17:58.729 "name": null, 00:17:58.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.729 "is_configured": false, 00:17:58.729 "data_offset": 0, 00:17:58.729 "data_size": 65536 00:17:58.729 }, 00:17:58.729 { 00:17:58.729 "name": "BaseBdev3", 00:17:58.729 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:58.729 "is_configured": true, 00:17:58.729 "data_offset": 0, 00:17:58.729 "data_size": 65536 00:17:58.729 }, 00:17:58.729 { 00:17:58.729 "name": "BaseBdev4", 00:17:58.729 "uuid": 
"dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:58.729 "is_configured": true, 00:17:58.729 "data_offset": 0, 00:17:58.729 "data_size": 65536 00:17:58.729 } 00:17:58.729 ] 00:17:58.729 }' 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.730 "name": "raid_bdev1", 00:17:58.730 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:58.730 
"strip_size_kb": 0, 00:17:58.730 "state": "online", 00:17:58.730 "raid_level": "raid1", 00:17:58.730 "superblock": false, 00:17:58.730 "num_base_bdevs": 4, 00:17:58.730 "num_base_bdevs_discovered": 3, 00:17:58.730 "num_base_bdevs_operational": 3, 00:17:58.730 "base_bdevs_list": [ 00:17:58.730 { 00:17:58.730 "name": "spare", 00:17:58.730 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:58.730 "is_configured": true, 00:17:58.730 "data_offset": 0, 00:17:58.730 "data_size": 65536 00:17:58.730 }, 00:17:58.730 { 00:17:58.730 "name": null, 00:17:58.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.730 "is_configured": false, 00:17:58.730 "data_offset": 0, 00:17:58.730 "data_size": 65536 00:17:58.730 }, 00:17:58.730 { 00:17:58.730 "name": "BaseBdev3", 00:17:58.730 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:58.730 "is_configured": true, 00:17:58.730 "data_offset": 0, 00:17:58.730 "data_size": 65536 00:17:58.730 }, 00:17:58.730 { 00:17:58.730 "name": "BaseBdev4", 00:17:58.730 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:58.730 "is_configured": true, 00:17:58.730 "data_offset": 0, 00:17:58.730 "data_size": 65536 00:17:58.730 } 00:17:58.730 ] 00:17:58.730 }' 00:17:58.730 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.988 
14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.988 "name": "raid_bdev1", 00:17:58.988 "uuid": "7dd65399-697b-4929-9755-e45bd499c3a7", 00:17:58.988 "strip_size_kb": 0, 00:17:58.988 "state": "online", 00:17:58.988 "raid_level": "raid1", 00:17:58.988 "superblock": false, 00:17:58.988 "num_base_bdevs": 4, 00:17:58.988 "num_base_bdevs_discovered": 3, 00:17:58.988 "num_base_bdevs_operational": 3, 00:17:58.988 "base_bdevs_list": [ 00:17:58.988 { 00:17:58.988 "name": "spare", 00:17:58.988 "uuid": "a33b0301-4d69-517d-826a-1854f2bce9e3", 00:17:58.988 "is_configured": true, 00:17:58.988 "data_offset": 0, 00:17:58.988 "data_size": 65536 00:17:58.988 }, 00:17:58.988 { 00:17:58.988 "name": null, 
00:17:58.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.988 "is_configured": false, 00:17:58.988 "data_offset": 0, 00:17:58.988 "data_size": 65536 00:17:58.988 }, 00:17:58.988 { 00:17:58.988 "name": "BaseBdev3", 00:17:58.988 "uuid": "d51ab1e7-ce38-5414-930f-b3c219cc9621", 00:17:58.988 "is_configured": true, 00:17:58.988 "data_offset": 0, 00:17:58.988 "data_size": 65536 00:17:58.988 }, 00:17:58.988 { 00:17:58.988 "name": "BaseBdev4", 00:17:58.988 "uuid": "dbb4d50d-5f4d-5372-a2b7-03398ec747f6", 00:17:58.988 "is_configured": true, 00:17:58.988 "data_offset": 0, 00:17:58.988 "data_size": 65536 00:17:58.988 } 00:17:58.988 ] 00:17:58.988 }' 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.988 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.554 [2024-11-27 14:17:29.831175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.554 [2024-11-27 14:17:29.831340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.554 [2024-11-27 14:17:29.831549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.554 [2024-11-27 14:17:29.831783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.554 [2024-11-27 14:17:29.831956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.554 14:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:59.813 /dev/nbd0 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.813 1+0 records in 00:17:59.813 1+0 records out 00:17:59.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322082 s, 12.7 MB/s 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:17:59.813 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:00.072 /dev/nbd1 00:18:00.072 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.331 1+0 records in 00:18:00.331 1+0 records out 00:18:00.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329227 s, 12.4 MB/s 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.331 14:17:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:00.590 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.590 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.590 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.590 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.591 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.591 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.591 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:00.591 14:17:31 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.591 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.591 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78003 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78003 ']' 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78003 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:00.850 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.109 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78003 00:18:01.109 killing process with pid 78003 00:18:01.109 Received shutdown signal, test time was about 60.000000 seconds 00:18:01.109 00:18:01.109 
Latency(us) 00:18:01.109 [2024-11-27T14:17:31.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.109 [2024-11-27T14:17:31.622Z] =================================================================================================================== 00:18:01.109 [2024-11-27T14:17:31.622Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.109 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.109 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.109 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78003' 00:18:01.109 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78003 00:18:01.109 [2024-11-27 14:17:31.385441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.109 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78003 00:18:01.376 [2024-11-27 14:17:31.840231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.766 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:02.766 00:18:02.766 real 0m21.234s 00:18:02.766 user 0m23.815s 00:18:02.766 sys 0m3.576s 00:18:02.766 ************************************ 00:18:02.766 END TEST raid_rebuild_test 00:18:02.766 ************************************ 00:18:02.766 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.766 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.766 14:17:32 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:18:02.766 14:17:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:02.766 14:17:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.766 14:17:32 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:18:02.766 ************************************ 00:18:02.766 START TEST raid_rebuild_test_sb 00:18:02.766 ************************************ 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.766 14:17:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78484 00:18:02.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78484 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78484 ']' 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.766 14:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.766 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:02.766 Zero copy mechanism will not be used. 00:18:02.766 [2024-11-27 14:17:33.136585] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:18:02.766 [2024-11-27 14:17:33.136862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78484 ] 00:18:03.025 [2024-11-27 14:17:33.330673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.025 [2024-11-27 14:17:33.465829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.283 [2024-11-27 14:17:33.678293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.283 [2024-11-27 14:17:33.678352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 BaseBdev1_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 [2024-11-27 14:17:34.117146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:03.851 [2024-11-27 14:17:34.117223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.851 [2024-11-27 14:17:34.117255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:03.851 [2024-11-27 14:17:34.117274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.851 [2024-11-27 14:17:34.120275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.851 [2024-11-27 14:17:34.120325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.851 BaseBdev1 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 BaseBdev2_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 [2024-11-27 14:17:34.175655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:03.851 [2024-11-27 14:17:34.175728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.851 [2024-11-27 14:17:34.175776] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:03.851 [2024-11-27 14:17:34.175793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.851 [2024-11-27 14:17:34.178858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.851 [2024-11-27 14:17:34.178917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:03.851 BaseBdev2 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 BaseBdev3_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 [2024-11-27 14:17:34.247933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:03.851 [2024-11-27 14:17:34.248056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.851 [2024-11-27 14:17:34.248088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:03.851 [2024-11-27 14:17:34.248106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:03.851 [2024-11-27 14:17:34.251126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.851 [2024-11-27 14:17:34.251204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:03.851 BaseBdev3 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 BaseBdev4_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 [2024-11-27 14:17:34.305548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:03.851 [2024-11-27 14:17:34.305657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.851 [2024-11-27 14:17:34.305696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:03.851 [2024-11-27 14:17:34.305714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.851 [2024-11-27 14:17:34.309150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.851 [2024-11-27 14:17:34.309224] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:03.851 BaseBdev4 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.851 spare_malloc 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.851 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.111 spare_delay 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.111 [2024-11-27 14:17:34.367624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.111 [2024-11-27 14:17:34.367734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.111 [2024-11-27 14:17:34.367760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:04.111 [2024-11-27 14:17:34.367777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:04.111 [2024-11-27 14:17:34.370689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.111 [2024-11-27 14:17:34.370879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.111 spare 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.111 [2024-11-27 14:17:34.379743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.111 [2024-11-27 14:17:34.382223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.111 [2024-11-27 14:17:34.382311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.111 [2024-11-27 14:17:34.382394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:04.111 [2024-11-27 14:17:34.382661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:04.111 [2024-11-27 14:17:34.382685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:04.111 [2024-11-27 14:17:34.383042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:04.111 [2024-11-27 14:17:34.383303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:04.111 [2024-11-27 14:17:34.383319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:04.111 [2024-11-27 14:17:34.383503] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.111 "name": "raid_bdev1", 00:18:04.111 "uuid": 
"cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:04.111 "strip_size_kb": 0, 00:18:04.111 "state": "online", 00:18:04.111 "raid_level": "raid1", 00:18:04.111 "superblock": true, 00:18:04.111 "num_base_bdevs": 4, 00:18:04.111 "num_base_bdevs_discovered": 4, 00:18:04.111 "num_base_bdevs_operational": 4, 00:18:04.111 "base_bdevs_list": [ 00:18:04.111 { 00:18:04.111 "name": "BaseBdev1", 00:18:04.111 "uuid": "fa8bbcb7-ca39-59be-9b89-deecaaf54c82", 00:18:04.111 "is_configured": true, 00:18:04.111 "data_offset": 2048, 00:18:04.111 "data_size": 63488 00:18:04.111 }, 00:18:04.111 { 00:18:04.111 "name": "BaseBdev2", 00:18:04.111 "uuid": "56582561-92b2-57c3-800f-213a4c27a920", 00:18:04.111 "is_configured": true, 00:18:04.111 "data_offset": 2048, 00:18:04.111 "data_size": 63488 00:18:04.111 }, 00:18:04.111 { 00:18:04.111 "name": "BaseBdev3", 00:18:04.111 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:04.111 "is_configured": true, 00:18:04.111 "data_offset": 2048, 00:18:04.111 "data_size": 63488 00:18:04.111 }, 00:18:04.111 { 00:18:04.111 "name": "BaseBdev4", 00:18:04.111 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:04.111 "is_configured": true, 00:18:04.111 "data_offset": 2048, 00:18:04.111 "data_size": 63488 00:18:04.111 } 00:18:04.111 ] 00:18:04.111 }' 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.111 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:04.679 [2024-11-27 14:17:34.916315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.679 14:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:04.679 14:17:35 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.679 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:04.938 [2024-11-27 14:17:35.280062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:04.938 /dev/nbd0 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.938 1+0 records in 00:18:04.938 1+0 records out 00:18:04.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365738 s, 11.2 MB/s 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:04.938 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:14.923 63488+0 records in 00:18:14.923 63488+0 records out 00:18:14.923 32505856 bytes (33 MB, 31 MiB) copied, 8.59709 s, 3.8 MB/s 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.923 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:18:14.923 [2024-11-27 14:17:44.225101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 [2024-11-27 14:17:44.254775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.923 "name": "raid_bdev1", 00:18:14.923 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:14.923 "strip_size_kb": 0, 00:18:14.923 "state": "online", 00:18:14.923 "raid_level": "raid1", 00:18:14.923 "superblock": true, 00:18:14.923 "num_base_bdevs": 4, 00:18:14.923 "num_base_bdevs_discovered": 3, 00:18:14.923 "num_base_bdevs_operational": 3, 00:18:14.923 "base_bdevs_list": [ 00:18:14.923 { 00:18:14.923 "name": null, 00:18:14.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.923 "is_configured": false, 00:18:14.923 "data_offset": 0, 00:18:14.923 "data_size": 63488 00:18:14.923 }, 00:18:14.923 { 00:18:14.923 "name": "BaseBdev2", 00:18:14.923 "uuid": "56582561-92b2-57c3-800f-213a4c27a920", 00:18:14.923 "is_configured": true, 00:18:14.923 
"data_offset": 2048, 00:18:14.923 "data_size": 63488 00:18:14.923 }, 00:18:14.923 { 00:18:14.923 "name": "BaseBdev3", 00:18:14.923 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:14.923 "is_configured": true, 00:18:14.923 "data_offset": 2048, 00:18:14.923 "data_size": 63488 00:18:14.923 }, 00:18:14.923 { 00:18:14.923 "name": "BaseBdev4", 00:18:14.923 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:14.923 "is_configured": true, 00:18:14.923 "data_offset": 2048, 00:18:14.923 "data_size": 63488 00:18:14.923 } 00:18:14.923 ] 00:18:14.923 }' 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 [2024-11-27 14:17:44.766978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.923 [2024-11-27 14:17:44.781897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:14.923 [2024-11-27 14:17:44.784373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.492 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.492 "name": "raid_bdev1", 00:18:15.492 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:15.492 "strip_size_kb": 0, 00:18:15.492 "state": "online", 00:18:15.492 "raid_level": "raid1", 00:18:15.492 "superblock": true, 00:18:15.492 "num_base_bdevs": 4, 00:18:15.492 "num_base_bdevs_discovered": 4, 00:18:15.492 "num_base_bdevs_operational": 4, 00:18:15.492 "process": { 00:18:15.492 "type": "rebuild", 00:18:15.492 "target": "spare", 00:18:15.492 "progress": { 00:18:15.492 "blocks": 20480, 00:18:15.492 "percent": 32 00:18:15.492 } 00:18:15.492 }, 00:18:15.492 "base_bdevs_list": [ 00:18:15.492 { 00:18:15.492 "name": "spare", 00:18:15.492 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:15.492 "is_configured": true, 00:18:15.492 "data_offset": 2048, 00:18:15.492 "data_size": 63488 00:18:15.492 }, 00:18:15.492 { 00:18:15.492 "name": "BaseBdev2", 00:18:15.492 "uuid": "56582561-92b2-57c3-800f-213a4c27a920", 00:18:15.492 "is_configured": true, 00:18:15.492 "data_offset": 2048, 00:18:15.492 "data_size": 63488 00:18:15.492 }, 00:18:15.492 { 00:18:15.492 "name": "BaseBdev3", 00:18:15.492 "uuid": 
"2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:15.492 "is_configured": true, 00:18:15.492 "data_offset": 2048, 00:18:15.492 "data_size": 63488 00:18:15.492 }, 00:18:15.492 { 00:18:15.492 "name": "BaseBdev4", 00:18:15.492 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:15.492 "is_configured": true, 00:18:15.493 "data_offset": 2048, 00:18:15.493 "data_size": 63488 00:18:15.493 } 00:18:15.493 ] 00:18:15.493 }' 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.493 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.493 [2024-11-27 14:17:45.950123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.493 [2024-11-27 14:17:45.994110] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:15.493 [2024-11-27 14:17:45.994199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.493 [2024-11-27 14:17:45.994233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.493 [2024-11-27 14:17:45.994248] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.752 "name": "raid_bdev1", 00:18:15.752 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:15.752 "strip_size_kb": 0, 00:18:15.752 "state": "online", 00:18:15.752 "raid_level": "raid1", 00:18:15.752 "superblock": true, 00:18:15.752 "num_base_bdevs": 4, 00:18:15.752 
"num_base_bdevs_discovered": 3, 00:18:15.752 "num_base_bdevs_operational": 3, 00:18:15.752 "base_bdevs_list": [ 00:18:15.752 { 00:18:15.752 "name": null, 00:18:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.752 "is_configured": false, 00:18:15.752 "data_offset": 0, 00:18:15.752 "data_size": 63488 00:18:15.752 }, 00:18:15.752 { 00:18:15.752 "name": "BaseBdev2", 00:18:15.752 "uuid": "56582561-92b2-57c3-800f-213a4c27a920", 00:18:15.752 "is_configured": true, 00:18:15.752 "data_offset": 2048, 00:18:15.752 "data_size": 63488 00:18:15.752 }, 00:18:15.752 { 00:18:15.752 "name": "BaseBdev3", 00:18:15.752 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:15.752 "is_configured": true, 00:18:15.752 "data_offset": 2048, 00:18:15.752 "data_size": 63488 00:18:15.752 }, 00:18:15.752 { 00:18:15.752 "name": "BaseBdev4", 00:18:15.752 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:15.752 "is_configured": true, 00:18:15.752 "data_offset": 2048, 00:18:15.752 "data_size": 63488 00:18:15.752 } 00:18:15.752 ] 00:18:15.752 }' 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.752 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.011 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.011 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.269 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.270 "name": "raid_bdev1", 00:18:16.270 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:16.270 "strip_size_kb": 0, 00:18:16.270 "state": "online", 00:18:16.270 "raid_level": "raid1", 00:18:16.270 "superblock": true, 00:18:16.270 "num_base_bdevs": 4, 00:18:16.270 "num_base_bdevs_discovered": 3, 00:18:16.270 "num_base_bdevs_operational": 3, 00:18:16.270 "base_bdevs_list": [ 00:18:16.270 { 00:18:16.270 "name": null, 00:18:16.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.270 "is_configured": false, 00:18:16.270 "data_offset": 0, 00:18:16.270 "data_size": 63488 00:18:16.270 }, 00:18:16.270 { 00:18:16.270 "name": "BaseBdev2", 00:18:16.270 "uuid": "56582561-92b2-57c3-800f-213a4c27a920", 00:18:16.270 "is_configured": true, 00:18:16.270 "data_offset": 2048, 00:18:16.270 "data_size": 63488 00:18:16.270 }, 00:18:16.270 { 00:18:16.270 "name": "BaseBdev3", 00:18:16.270 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:16.270 "is_configured": true, 00:18:16.270 "data_offset": 2048, 00:18:16.270 "data_size": 63488 00:18:16.270 }, 00:18:16.270 { 00:18:16.270 "name": "BaseBdev4", 00:18:16.270 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:16.270 "is_configured": true, 00:18:16.270 "data_offset": 2048, 00:18:16.270 "data_size": 63488 00:18:16.270 } 00:18:16.270 ] 00:18:16.270 }' 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.270 [2024-11-27 14:17:46.686293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.270 [2024-11-27 14:17:46.699898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.270 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:16.270 [2024-11-27 14:17:46.702522] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.203 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.460 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.460 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.460 "name": "raid_bdev1", 00:18:17.460 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:17.460 "strip_size_kb": 0, 00:18:17.460 "state": "online", 00:18:17.460 "raid_level": "raid1", 00:18:17.460 "superblock": true, 00:18:17.460 "num_base_bdevs": 4, 00:18:17.460 "num_base_bdevs_discovered": 4, 00:18:17.460 "num_base_bdevs_operational": 4, 00:18:17.460 "process": { 00:18:17.460 "type": "rebuild", 00:18:17.460 "target": "spare", 00:18:17.460 "progress": { 00:18:17.460 "blocks": 20480, 00:18:17.460 "percent": 32 00:18:17.460 } 00:18:17.460 }, 00:18:17.461 "base_bdevs_list": [ 00:18:17.461 { 00:18:17.461 "name": "spare", 00:18:17.461 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:17.461 "is_configured": true, 00:18:17.461 "data_offset": 2048, 00:18:17.461 "data_size": 63488 00:18:17.461 }, 00:18:17.461 { 00:18:17.461 "name": "BaseBdev2", 00:18:17.461 "uuid": "56582561-92b2-57c3-800f-213a4c27a920", 00:18:17.461 "is_configured": true, 00:18:17.461 "data_offset": 2048, 00:18:17.461 "data_size": 63488 00:18:17.461 }, 00:18:17.461 { 00:18:17.461 "name": "BaseBdev3", 00:18:17.461 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:17.461 "is_configured": true, 00:18:17.461 "data_offset": 2048, 00:18:17.461 "data_size": 63488 00:18:17.461 }, 00:18:17.461 { 00:18:17.461 "name": "BaseBdev4", 00:18:17.461 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:17.461 "is_configured": true, 00:18:17.461 "data_offset": 2048, 00:18:17.461 "data_size": 63488 00:18:17.461 } 00:18:17.461 ] 00:18:17.461 }' 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:17.461 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.461 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.461 [2024-11-27 14:17:47.863783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.718 [2024-11-27 14:17:48.011961] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.718 "name": "raid_bdev1", 00:18:17.718 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:17.718 "strip_size_kb": 0, 00:18:17.718 "state": "online", 00:18:17.718 "raid_level": "raid1", 00:18:17.718 "superblock": true, 00:18:17.718 "num_base_bdevs": 4, 00:18:17.718 "num_base_bdevs_discovered": 3, 00:18:17.718 "num_base_bdevs_operational": 3, 00:18:17.718 "process": { 00:18:17.718 "type": "rebuild", 00:18:17.718 "target": "spare", 00:18:17.718 "progress": { 00:18:17.718 "blocks": 24576, 00:18:17.718 "percent": 38 00:18:17.718 } 00:18:17.718 }, 00:18:17.718 "base_bdevs_list": [ 00:18:17.718 { 00:18:17.718 "name": "spare", 00:18:17.718 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": null, 00:18:17.718 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:17.718 "is_configured": false, 00:18:17.718 "data_offset": 0, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev3", 00:18:17.718 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev4", 00:18:17.718 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 } 00:18:17.718 ] 00:18:17.718 }' 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=510 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.718 
14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.718 "name": "raid_bdev1", 00:18:17.718 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:17.718 "strip_size_kb": 0, 00:18:17.718 "state": "online", 00:18:17.718 "raid_level": "raid1", 00:18:17.718 "superblock": true, 00:18:17.718 "num_base_bdevs": 4, 00:18:17.718 "num_base_bdevs_discovered": 3, 00:18:17.718 "num_base_bdevs_operational": 3, 00:18:17.718 "process": { 00:18:17.718 "type": "rebuild", 00:18:17.718 "target": "spare", 00:18:17.718 "progress": { 00:18:17.718 "blocks": 26624, 00:18:17.718 "percent": 41 00:18:17.718 } 00:18:17.718 }, 00:18:17.718 "base_bdevs_list": [ 00:18:17.718 { 00:18:17.718 "name": "spare", 00:18:17.718 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": null, 00:18:17.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.718 "is_configured": false, 00:18:17.718 "data_offset": 0, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev3", 00:18:17.718 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 00:18:17.718 }, 00:18:17.718 { 00:18:17.718 "name": "BaseBdev4", 00:18:17.718 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:17.718 "is_configured": true, 00:18:17.718 "data_offset": 2048, 00:18:17.718 "data_size": 63488 
00:18:17.718 } 00:18:17.718 ] 00:18:17.718 }' 00:18:17.718 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.974 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.975 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.975 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.975 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.910 "name": "raid_bdev1", 00:18:18.910 "uuid": 
"cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:18.910 "strip_size_kb": 0, 00:18:18.910 "state": "online", 00:18:18.910 "raid_level": "raid1", 00:18:18.910 "superblock": true, 00:18:18.910 "num_base_bdevs": 4, 00:18:18.910 "num_base_bdevs_discovered": 3, 00:18:18.910 "num_base_bdevs_operational": 3, 00:18:18.910 "process": { 00:18:18.910 "type": "rebuild", 00:18:18.910 "target": "spare", 00:18:18.910 "progress": { 00:18:18.910 "blocks": 51200, 00:18:18.910 "percent": 80 00:18:18.910 } 00:18:18.910 }, 00:18:18.910 "base_bdevs_list": [ 00:18:18.910 { 00:18:18.910 "name": "spare", 00:18:18.910 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:18.910 "is_configured": true, 00:18:18.910 "data_offset": 2048, 00:18:18.910 "data_size": 63488 00:18:18.910 }, 00:18:18.910 { 00:18:18.910 "name": null, 00:18:18.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.910 "is_configured": false, 00:18:18.910 "data_offset": 0, 00:18:18.910 "data_size": 63488 00:18:18.910 }, 00:18:18.910 { 00:18:18.910 "name": "BaseBdev3", 00:18:18.910 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:18.910 "is_configured": true, 00:18:18.910 "data_offset": 2048, 00:18:18.910 "data_size": 63488 00:18:18.910 }, 00:18:18.910 { 00:18:18.910 "name": "BaseBdev4", 00:18:18.910 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:18.910 "is_configured": true, 00:18:18.910 "data_offset": 2048, 00:18:18.910 "data_size": 63488 00:18:18.910 } 00:18:18.910 ] 00:18:18.910 }' 00:18:18.910 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.167 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.167 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.167 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.167 14:17:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.424 [2024-11-27 14:17:49.926543] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:19.424 [2024-11-27 14:17:49.926648] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:19.424 [2024-11-27 14:17:49.926852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.991 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.249 "name": "raid_bdev1", 00:18:20.249 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:20.249 "strip_size_kb": 0, 00:18:20.249 "state": "online", 00:18:20.249 "raid_level": "raid1", 00:18:20.249 "superblock": true, 00:18:20.249 "num_base_bdevs": 
4, 00:18:20.249 "num_base_bdevs_discovered": 3, 00:18:20.249 "num_base_bdevs_operational": 3, 00:18:20.249 "base_bdevs_list": [ 00:18:20.249 { 00:18:20.249 "name": "spare", 00:18:20.249 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:20.249 "is_configured": true, 00:18:20.249 "data_offset": 2048, 00:18:20.249 "data_size": 63488 00:18:20.249 }, 00:18:20.249 { 00:18:20.249 "name": null, 00:18:20.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.249 "is_configured": false, 00:18:20.249 "data_offset": 0, 00:18:20.249 "data_size": 63488 00:18:20.249 }, 00:18:20.249 { 00:18:20.249 "name": "BaseBdev3", 00:18:20.249 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:20.249 "is_configured": true, 00:18:20.249 "data_offset": 2048, 00:18:20.249 "data_size": 63488 00:18:20.249 }, 00:18:20.249 { 00:18:20.249 "name": "BaseBdev4", 00:18:20.249 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:20.249 "is_configured": true, 00:18:20.249 "data_offset": 2048, 00:18:20.249 "data_size": 63488 00:18:20.249 } 00:18:20.249 ] 00:18:20.249 }' 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.249 14:17:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.249 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.249 "name": "raid_bdev1", 00:18:20.249 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:20.249 "strip_size_kb": 0, 00:18:20.249 "state": "online", 00:18:20.249 "raid_level": "raid1", 00:18:20.249 "superblock": true, 00:18:20.249 "num_base_bdevs": 4, 00:18:20.249 "num_base_bdevs_discovered": 3, 00:18:20.249 "num_base_bdevs_operational": 3, 00:18:20.249 "base_bdevs_list": [ 00:18:20.249 { 00:18:20.249 "name": "spare", 00:18:20.249 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:20.249 "is_configured": true, 00:18:20.249 "data_offset": 2048, 00:18:20.249 "data_size": 63488 00:18:20.249 }, 00:18:20.249 { 00:18:20.249 "name": null, 00:18:20.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.250 "is_configured": false, 00:18:20.250 "data_offset": 0, 00:18:20.250 "data_size": 63488 00:18:20.250 }, 00:18:20.250 { 00:18:20.250 "name": "BaseBdev3", 00:18:20.250 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:20.250 "is_configured": true, 00:18:20.250 "data_offset": 2048, 00:18:20.250 "data_size": 63488 00:18:20.250 }, 00:18:20.250 { 00:18:20.250 "name": "BaseBdev4", 00:18:20.250 "uuid": 
"f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:20.250 "is_configured": true, 00:18:20.250 "data_offset": 2048, 00:18:20.250 "data_size": 63488 00:18:20.250 } 00:18:20.250 ] 00:18:20.250 }' 00:18:20.250 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.250 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.250 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.508 14:17:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.508 "name": "raid_bdev1", 00:18:20.508 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:20.508 "strip_size_kb": 0, 00:18:20.508 "state": "online", 00:18:20.508 "raid_level": "raid1", 00:18:20.508 "superblock": true, 00:18:20.508 "num_base_bdevs": 4, 00:18:20.508 "num_base_bdevs_discovered": 3, 00:18:20.508 "num_base_bdevs_operational": 3, 00:18:20.508 "base_bdevs_list": [ 00:18:20.508 { 00:18:20.508 "name": "spare", 00:18:20.508 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:20.508 "is_configured": true, 00:18:20.508 "data_offset": 2048, 00:18:20.508 "data_size": 63488 00:18:20.508 }, 00:18:20.508 { 00:18:20.508 "name": null, 00:18:20.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.508 "is_configured": false, 00:18:20.508 "data_offset": 0, 00:18:20.508 "data_size": 63488 00:18:20.508 }, 00:18:20.508 { 00:18:20.508 "name": "BaseBdev3", 00:18:20.508 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:20.508 "is_configured": true, 00:18:20.508 "data_offset": 2048, 00:18:20.508 "data_size": 63488 00:18:20.508 }, 00:18:20.508 { 00:18:20.508 "name": "BaseBdev4", 00:18:20.508 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:20.508 "is_configured": true, 00:18:20.508 "data_offset": 2048, 00:18:20.508 "data_size": 63488 00:18:20.508 } 00:18:20.508 ] 00:18:20.508 }' 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.508 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.076 [2024-11-27 14:17:51.319321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.076 [2024-11-27 14:17:51.319359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.076 [2024-11-27 14:17:51.319462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.076 [2024-11-27 14:17:51.319577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.076 [2024-11-27 14:17:51.319599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:21.076 
14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.076 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:21.335 /dev/nbd0 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.335 14:17:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.335 1+0 records in 00:18:21.335 1+0 records out 00:18:21.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603659 s, 6.8 MB/s 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.335 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:21.594 /dev/nbd1 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.594 1+0 records in 00:18:21.594 1+0 records out 00:18:21.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434171 s, 9.4 MB/s 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.594 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:21.852 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:21.853 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.853 14:17:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.853 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.853 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:21.853 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.853 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.111 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.369 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.628 [2024-11-27 14:17:52.884234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.628 [2024-11-27 14:17:52.884298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.628 [2024-11-27 14:17:52.884332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:22.628 [2024-11-27 14:17:52.884347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.628 [2024-11-27 14:17:52.887356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.628 [2024-11-27 14:17:52.887401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:18:22.628 [2024-11-27 14:17:52.887515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:22.628 [2024-11-27 14:17:52.887590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.628 [2024-11-27 14:17:52.887767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.628 [2024-11-27 14:17:52.887913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:22.628 spare 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.628 [2024-11-27 14:17:52.988056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:22.628 [2024-11-27 14:17:52.988106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:22.628 [2024-11-27 14:17:52.988669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:22.628 [2024-11-27 14:17:52.989024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:22.628 [2024-11-27 14:17:52.989048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:22.628 [2024-11-27 14:17:52.989297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:22.628 14:17:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.628 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.628 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.628 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.628 "name": "raid_bdev1", 00:18:22.628 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:22.628 "strip_size_kb": 0, 00:18:22.628 "state": "online", 00:18:22.628 "raid_level": "raid1", 00:18:22.628 "superblock": true, 00:18:22.628 "num_base_bdevs": 4, 00:18:22.628 "num_base_bdevs_discovered": 3, 00:18:22.628 "num_base_bdevs_operational": 3, 00:18:22.628 "base_bdevs_list": [ 00:18:22.628 { 
00:18:22.628 "name": "spare", 00:18:22.628 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:22.628 "is_configured": true, 00:18:22.628 "data_offset": 2048, 00:18:22.628 "data_size": 63488 00:18:22.628 }, 00:18:22.628 { 00:18:22.628 "name": null, 00:18:22.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.628 "is_configured": false, 00:18:22.628 "data_offset": 2048, 00:18:22.628 "data_size": 63488 00:18:22.628 }, 00:18:22.628 { 00:18:22.628 "name": "BaseBdev3", 00:18:22.628 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:22.628 "is_configured": true, 00:18:22.628 "data_offset": 2048, 00:18:22.628 "data_size": 63488 00:18:22.628 }, 00:18:22.628 { 00:18:22.628 "name": "BaseBdev4", 00:18:22.628 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:22.628 "is_configured": true, 00:18:22.628 "data_offset": 2048, 00:18:22.628 "data_size": 63488 00:18:22.628 } 00:18:22.628 ] 00:18:22.628 }' 00:18:22.628 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.628 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.205 "name": "raid_bdev1", 00:18:23.205 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:23.205 "strip_size_kb": 0, 00:18:23.205 "state": "online", 00:18:23.205 "raid_level": "raid1", 00:18:23.205 "superblock": true, 00:18:23.205 "num_base_bdevs": 4, 00:18:23.205 "num_base_bdevs_discovered": 3, 00:18:23.205 "num_base_bdevs_operational": 3, 00:18:23.205 "base_bdevs_list": [ 00:18:23.205 { 00:18:23.205 "name": "spare", 00:18:23.205 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:23.205 "is_configured": true, 00:18:23.205 "data_offset": 2048, 00:18:23.205 "data_size": 63488 00:18:23.205 }, 00:18:23.205 { 00:18:23.205 "name": null, 00:18:23.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.205 "is_configured": false, 00:18:23.205 "data_offset": 2048, 00:18:23.205 "data_size": 63488 00:18:23.205 }, 00:18:23.205 { 00:18:23.205 "name": "BaseBdev3", 00:18:23.205 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:23.205 "is_configured": true, 00:18:23.205 "data_offset": 2048, 00:18:23.205 "data_size": 63488 00:18:23.205 }, 00:18:23.205 { 00:18:23.205 "name": "BaseBdev4", 00:18:23.205 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:23.205 "is_configured": true, 00:18:23.205 "data_offset": 2048, 00:18:23.205 "data_size": 63488 00:18:23.205 } 00:18:23.205 ] 00:18:23.205 }' 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.205 14:17:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.205 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.465 [2024-11-27 14:17:53.721534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.465 14:17:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.465 "name": "raid_bdev1", 00:18:23.465 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:23.465 "strip_size_kb": 0, 00:18:23.465 "state": "online", 00:18:23.465 "raid_level": "raid1", 00:18:23.465 "superblock": true, 00:18:23.465 "num_base_bdevs": 4, 00:18:23.465 "num_base_bdevs_discovered": 2, 00:18:23.465 "num_base_bdevs_operational": 2, 00:18:23.465 "base_bdevs_list": [ 00:18:23.465 { 00:18:23.465 "name": null, 00:18:23.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.465 "is_configured": false, 00:18:23.465 "data_offset": 0, 00:18:23.465 "data_size": 63488 00:18:23.465 }, 00:18:23.465 { 00:18:23.465 "name": null, 00:18:23.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.465 "is_configured": false, 00:18:23.465 "data_offset": 2048, 00:18:23.465 "data_size": 63488 00:18:23.465 }, 00:18:23.465 { 00:18:23.465 "name": "BaseBdev3", 00:18:23.465 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:23.465 
"is_configured": true, 00:18:23.465 "data_offset": 2048, 00:18:23.465 "data_size": 63488 00:18:23.465 }, 00:18:23.465 { 00:18:23.465 "name": "BaseBdev4", 00:18:23.465 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:23.465 "is_configured": true, 00:18:23.465 "data_offset": 2048, 00:18:23.465 "data_size": 63488 00:18:23.465 } 00:18:23.465 ] 00:18:23.465 }' 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.465 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.032 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.032 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.032 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.032 [2024-11-27 14:17:54.261742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.032 [2024-11-27 14:17:54.262104] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:24.032 [2024-11-27 14:17:54.262141] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:24.032 [2024-11-27 14:17:54.262191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.032 [2024-11-27 14:17:54.276434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:18:24.032 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.032 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:24.032 [2024-11-27 14:17:54.279028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.968 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.968 "name": "raid_bdev1", 00:18:24.969 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:24.969 "strip_size_kb": 0, 00:18:24.969 "state": "online", 00:18:24.969 "raid_level": "raid1", 
00:18:24.969 "superblock": true, 00:18:24.969 "num_base_bdevs": 4, 00:18:24.969 "num_base_bdevs_discovered": 3, 00:18:24.969 "num_base_bdevs_operational": 3, 00:18:24.969 "process": { 00:18:24.969 "type": "rebuild", 00:18:24.969 "target": "spare", 00:18:24.969 "progress": { 00:18:24.969 "blocks": 20480, 00:18:24.969 "percent": 32 00:18:24.969 } 00:18:24.969 }, 00:18:24.969 "base_bdevs_list": [ 00:18:24.969 { 00:18:24.969 "name": "spare", 00:18:24.969 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:24.969 "is_configured": true, 00:18:24.969 "data_offset": 2048, 00:18:24.969 "data_size": 63488 00:18:24.969 }, 00:18:24.969 { 00:18:24.969 "name": null, 00:18:24.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.969 "is_configured": false, 00:18:24.969 "data_offset": 2048, 00:18:24.969 "data_size": 63488 00:18:24.969 }, 00:18:24.969 { 00:18:24.969 "name": "BaseBdev3", 00:18:24.969 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:24.969 "is_configured": true, 00:18:24.969 "data_offset": 2048, 00:18:24.969 "data_size": 63488 00:18:24.969 }, 00:18:24.969 { 00:18:24.969 "name": "BaseBdev4", 00:18:24.969 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:24.969 "is_configured": true, 00:18:24.969 "data_offset": 2048, 00:18:24.969 "data_size": 63488 00:18:24.969 } 00:18:24.969 ] 00:18:24.969 }' 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:24.969 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.969 [2024-11-27 14:17:55.440761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.227 [2024-11-27 14:17:55.488630] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:25.227 [2024-11-27 14:17:55.488755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.227 [2024-11-27 14:17:55.488784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.227 [2024-11-27 14:17:55.488795] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.228 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.228 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.228 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.228 "name": "raid_bdev1", 00:18:25.228 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:25.228 "strip_size_kb": 0, 00:18:25.228 "state": "online", 00:18:25.228 "raid_level": "raid1", 00:18:25.228 "superblock": true, 00:18:25.228 "num_base_bdevs": 4, 00:18:25.228 "num_base_bdevs_discovered": 2, 00:18:25.228 "num_base_bdevs_operational": 2, 00:18:25.228 "base_bdevs_list": [ 00:18:25.228 { 00:18:25.228 "name": null, 00:18:25.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.228 "is_configured": false, 00:18:25.228 "data_offset": 0, 00:18:25.228 "data_size": 63488 00:18:25.228 }, 00:18:25.228 { 00:18:25.228 "name": null, 00:18:25.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.228 "is_configured": false, 00:18:25.228 "data_offset": 2048, 00:18:25.228 "data_size": 63488 00:18:25.228 }, 00:18:25.228 { 00:18:25.228 "name": "BaseBdev3", 00:18:25.228 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:25.228 "is_configured": true, 00:18:25.228 "data_offset": 2048, 00:18:25.228 "data_size": 63488 00:18:25.228 }, 00:18:25.228 { 00:18:25.228 "name": "BaseBdev4", 00:18:25.228 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:25.228 "is_configured": true, 00:18:25.228 "data_offset": 2048, 00:18:25.228 "data_size": 63488 00:18:25.228 } 00:18:25.228 ] 00:18:25.228 }' 00:18:25.228 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:25.228 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.796 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:25.796 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.796 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.796 [2024-11-27 14:17:56.047138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.796 [2024-11-27 14:17:56.047221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.796 [2024-11-27 14:17:56.047264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:25.796 [2024-11-27 14:17:56.047280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.796 [2024-11-27 14:17:56.047906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.796 [2024-11-27 14:17:56.047953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.796 [2024-11-27 14:17:56.048097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:25.796 [2024-11-27 14:17:56.048118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:25.796 [2024-11-27 14:17:56.048138] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:25.796 [2024-11-27 14:17:56.048171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.796 [2024-11-27 14:17:56.062528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:18:25.796 spare 00:18:25.796 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.796 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:25.796 [2024-11-27 14:17:56.065539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.733 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.733 "name": "raid_bdev1", 00:18:26.733 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:26.733 "strip_size_kb": 0, 00:18:26.733 "state": "online", 00:18:26.733 
"raid_level": "raid1", 00:18:26.733 "superblock": true, 00:18:26.733 "num_base_bdevs": 4, 00:18:26.733 "num_base_bdevs_discovered": 3, 00:18:26.733 "num_base_bdevs_operational": 3, 00:18:26.733 "process": { 00:18:26.733 "type": "rebuild", 00:18:26.733 "target": "spare", 00:18:26.733 "progress": { 00:18:26.733 "blocks": 20480, 00:18:26.733 "percent": 32 00:18:26.734 } 00:18:26.734 }, 00:18:26.734 "base_bdevs_list": [ 00:18:26.734 { 00:18:26.734 "name": "spare", 00:18:26.734 "uuid": "546da84a-f082-5a4c-bb37-1e4dccaa9b87", 00:18:26.734 "is_configured": true, 00:18:26.734 "data_offset": 2048, 00:18:26.734 "data_size": 63488 00:18:26.734 }, 00:18:26.734 { 00:18:26.734 "name": null, 00:18:26.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.734 "is_configured": false, 00:18:26.734 "data_offset": 2048, 00:18:26.734 "data_size": 63488 00:18:26.734 }, 00:18:26.734 { 00:18:26.734 "name": "BaseBdev3", 00:18:26.734 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:26.734 "is_configured": true, 00:18:26.734 "data_offset": 2048, 00:18:26.734 "data_size": 63488 00:18:26.734 }, 00:18:26.734 { 00:18:26.734 "name": "BaseBdev4", 00:18:26.734 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:26.734 "is_configured": true, 00:18:26.734 "data_offset": 2048, 00:18:26.734 "data_size": 63488 00:18:26.734 } 00:18:26.734 ] 00:18:26.734 }' 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.734 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.734 [2024-11-27 14:17:57.231526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.993 [2024-11-27 14:17:57.275453] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.993 [2024-11-27 14:17:57.275533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.993 [2024-11-27 14:17:57.275556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.993 [2024-11-27 14:17:57.275570] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.993 
14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.993 "name": "raid_bdev1", 00:18:26.993 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:26.993 "strip_size_kb": 0, 00:18:26.993 "state": "online", 00:18:26.993 "raid_level": "raid1", 00:18:26.993 "superblock": true, 00:18:26.993 "num_base_bdevs": 4, 00:18:26.993 "num_base_bdevs_discovered": 2, 00:18:26.993 "num_base_bdevs_operational": 2, 00:18:26.993 "base_bdevs_list": [ 00:18:26.993 { 00:18:26.993 "name": null, 00:18:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.993 "is_configured": false, 00:18:26.993 "data_offset": 0, 00:18:26.993 "data_size": 63488 00:18:26.993 }, 00:18:26.993 { 00:18:26.993 "name": null, 00:18:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.993 "is_configured": false, 00:18:26.993 "data_offset": 2048, 00:18:26.993 "data_size": 63488 00:18:26.993 }, 00:18:26.993 { 00:18:26.993 "name": "BaseBdev3", 00:18:26.993 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:26.993 "is_configured": true, 00:18:26.993 "data_offset": 2048, 00:18:26.993 "data_size": 63488 00:18:26.993 }, 00:18:26.993 { 00:18:26.993 "name": "BaseBdev4", 00:18:26.993 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:26.993 "is_configured": true, 00:18:26.993 "data_offset": 2048, 00:18:26.993 "data_size": 63488 00:18:26.993 } 00:18:26.993 ] 00:18:26.993 }' 00:18:26.993 14:17:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.993 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.561 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.561 "name": "raid_bdev1", 00:18:27.561 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:27.561 "strip_size_kb": 0, 00:18:27.561 "state": "online", 00:18:27.561 "raid_level": "raid1", 00:18:27.561 "superblock": true, 00:18:27.561 "num_base_bdevs": 4, 00:18:27.561 "num_base_bdevs_discovered": 2, 00:18:27.561 "num_base_bdevs_operational": 2, 00:18:27.561 "base_bdevs_list": [ 00:18:27.561 { 00:18:27.561 "name": null, 00:18:27.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.561 "is_configured": false, 00:18:27.561 "data_offset": 0, 00:18:27.561 "data_size": 63488 00:18:27.561 }, 00:18:27.561 
{ 00:18:27.561 "name": null, 00:18:27.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.562 "is_configured": false, 00:18:27.562 "data_offset": 2048, 00:18:27.562 "data_size": 63488 00:18:27.562 }, 00:18:27.562 { 00:18:27.562 "name": "BaseBdev3", 00:18:27.562 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:27.562 "is_configured": true, 00:18:27.562 "data_offset": 2048, 00:18:27.562 "data_size": 63488 00:18:27.562 }, 00:18:27.562 { 00:18:27.562 "name": "BaseBdev4", 00:18:27.562 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:27.562 "is_configured": true, 00:18:27.562 "data_offset": 2048, 00:18:27.562 "data_size": 63488 00:18:27.562 } 00:18:27.562 ] 00:18:27.562 }' 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.562 [2024-11-27 14:17:57.980207] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:27.562 [2024-11-27 14:17:57.980286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.562 [2024-11-27 14:17:57.980316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:27.562 [2024-11-27 14:17:57.980334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.562 [2024-11-27 14:17:57.980955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.562 [2024-11-27 14:17:57.981001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.562 [2024-11-27 14:17:57.981100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:27.562 [2024-11-27 14:17:57.981126] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:27.562 [2024-11-27 14:17:57.981137] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:27.562 [2024-11-27 14:17:57.981165] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:27.562 BaseBdev1 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.562 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.497 14:17:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.497 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.498 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.498 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.757 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.757 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.757 "name": "raid_bdev1", 00:18:28.757 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:28.757 "strip_size_kb": 0, 00:18:28.757 "state": "online", 00:18:28.757 "raid_level": "raid1", 00:18:28.757 "superblock": true, 00:18:28.757 "num_base_bdevs": 4, 00:18:28.757 "num_base_bdevs_discovered": 2, 00:18:28.757 "num_base_bdevs_operational": 2, 00:18:28.757 "base_bdevs_list": [ 00:18:28.757 { 00:18:28.757 "name": null, 00:18:28.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.757 "is_configured": false, 00:18:28.757 "data_offset": 0, 00:18:28.757 "data_size": 63488 00:18:28.757 }, 00:18:28.757 { 00:18:28.757 "name": null, 00:18:28.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.757 
"is_configured": false, 00:18:28.757 "data_offset": 2048, 00:18:28.757 "data_size": 63488 00:18:28.757 }, 00:18:28.757 { 00:18:28.757 "name": "BaseBdev3", 00:18:28.757 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:28.757 "is_configured": true, 00:18:28.757 "data_offset": 2048, 00:18:28.757 "data_size": 63488 00:18:28.757 }, 00:18:28.757 { 00:18:28.757 "name": "BaseBdev4", 00:18:28.757 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:28.757 "is_configured": true, 00:18:28.757 "data_offset": 2048, 00:18:28.757 "data_size": 63488 00:18:28.757 } 00:18:28.757 ] 00:18:28.757 }' 00:18:28.757 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.757 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.015 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:29.274 "name": "raid_bdev1", 00:18:29.274 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:29.274 "strip_size_kb": 0, 00:18:29.274 "state": "online", 00:18:29.274 "raid_level": "raid1", 00:18:29.274 "superblock": true, 00:18:29.274 "num_base_bdevs": 4, 00:18:29.274 "num_base_bdevs_discovered": 2, 00:18:29.274 "num_base_bdevs_operational": 2, 00:18:29.274 "base_bdevs_list": [ 00:18:29.274 { 00:18:29.274 "name": null, 00:18:29.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.274 "is_configured": false, 00:18:29.274 "data_offset": 0, 00:18:29.274 "data_size": 63488 00:18:29.274 }, 00:18:29.274 { 00:18:29.274 "name": null, 00:18:29.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.274 "is_configured": false, 00:18:29.274 "data_offset": 2048, 00:18:29.274 "data_size": 63488 00:18:29.274 }, 00:18:29.274 { 00:18:29.274 "name": "BaseBdev3", 00:18:29.274 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:29.274 "is_configured": true, 00:18:29.274 "data_offset": 2048, 00:18:29.274 "data_size": 63488 00:18:29.274 }, 00:18:29.274 { 00:18:29.274 "name": "BaseBdev4", 00:18:29.274 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:29.274 "is_configured": true, 00:18:29.274 "data_offset": 2048, 00:18:29.274 "data_size": 63488 00:18:29.274 } 00:18:29.274 ] 00:18:29.274 }' 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.274 [2024-11-27 14:17:59.680852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.274 [2024-11-27 14:17:59.681108] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:29.274 [2024-11-27 14:17:59.681129] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:29.274 request: 00:18:29.274 { 00:18:29.274 "base_bdev": "BaseBdev1", 00:18:29.274 "raid_bdev": "raid_bdev1", 00:18:29.274 "method": "bdev_raid_add_base_bdev", 00:18:29.274 "req_id": 1 00:18:29.274 } 00:18:29.274 Got JSON-RPC error response 00:18:29.274 response: 00:18:29.274 { 00:18:29.274 "code": -22, 00:18:29.274 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:29.274 } 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.274 14:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:30.210 14:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.469 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.469 "name": "raid_bdev1", 00:18:30.469 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:30.469 "strip_size_kb": 0, 00:18:30.469 "state": "online", 00:18:30.469 "raid_level": "raid1", 00:18:30.469 "superblock": true, 00:18:30.469 "num_base_bdevs": 4, 00:18:30.469 "num_base_bdevs_discovered": 2, 00:18:30.469 "num_base_bdevs_operational": 2, 00:18:30.469 "base_bdevs_list": [ 00:18:30.469 { 00:18:30.469 "name": null, 00:18:30.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.469 "is_configured": false, 00:18:30.469 "data_offset": 0, 00:18:30.469 "data_size": 63488 00:18:30.469 }, 00:18:30.469 { 00:18:30.469 "name": null, 00:18:30.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.469 "is_configured": false, 00:18:30.469 "data_offset": 2048, 00:18:30.469 "data_size": 63488 00:18:30.469 }, 00:18:30.469 { 00:18:30.469 "name": "BaseBdev3", 00:18:30.469 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:30.469 "is_configured": true, 00:18:30.469 "data_offset": 2048, 00:18:30.469 "data_size": 63488 00:18:30.469 }, 00:18:30.469 { 00:18:30.469 "name": "BaseBdev4", 00:18:30.469 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:30.469 "is_configured": true, 00:18:30.469 "data_offset": 2048, 00:18:30.469 "data_size": 63488 00:18:30.469 } 00:18:30.469 ] 00:18:30.469 }' 00:18:30.469 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.469 14:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.727 14:18:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.727 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.986 "name": "raid_bdev1", 00:18:30.986 "uuid": "cb1ddbd9-72a0-45b0-8cda-b48b6b93591c", 00:18:30.986 "strip_size_kb": 0, 00:18:30.986 "state": "online", 00:18:30.986 "raid_level": "raid1", 00:18:30.986 "superblock": true, 00:18:30.986 "num_base_bdevs": 4, 00:18:30.986 "num_base_bdevs_discovered": 2, 00:18:30.986 "num_base_bdevs_operational": 2, 00:18:30.986 "base_bdevs_list": [ 00:18:30.986 { 00:18:30.986 "name": null, 00:18:30.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.986 "is_configured": false, 00:18:30.986 "data_offset": 0, 00:18:30.986 "data_size": 63488 00:18:30.986 }, 00:18:30.986 { 00:18:30.986 "name": null, 00:18:30.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.986 "is_configured": false, 00:18:30.986 "data_offset": 2048, 00:18:30.986 "data_size": 63488 00:18:30.986 }, 00:18:30.986 { 00:18:30.986 "name": "BaseBdev3", 00:18:30.986 "uuid": "2304c05b-318d-5edb-ad84-f9b3901993c4", 00:18:30.986 "is_configured": true, 00:18:30.986 "data_offset": 2048, 00:18:30.986 "data_size": 63488 00:18:30.986 }, 
00:18:30.986 { 00:18:30.986 "name": "BaseBdev4", 00:18:30.986 "uuid": "f05c2ca0-c85b-5921-926d-83dabf1de6bd", 00:18:30.986 "is_configured": true, 00:18:30.986 "data_offset": 2048, 00:18:30.986 "data_size": 63488 00:18:30.986 } 00:18:30.986 ] 00:18:30.986 }' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78484 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78484 ']' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78484 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78484 00:18:30.986 killing process with pid 78484 00:18:30.986 Received shutdown signal, test time was about 60.000000 seconds 00:18:30.986 00:18:30.986 Latency(us) 00:18:30.986 [2024-11-27T14:18:01.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.986 [2024-11-27T14:18:01.499Z] =================================================================================================================== 00:18:30.986 [2024-11-27T14:18:01.499Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78484' 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78484 00:18:30.986 [2024-11-27 14:18:01.424942] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.986 14:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78484 00:18:30.986 [2024-11-27 14:18:01.425092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.986 [2024-11-27 14:18:01.425194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.986 [2024-11-27 14:18:01.425210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:31.554 [2024-11-27 14:18:01.870649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.498 14:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:32.498 00:18:32.498 real 0m29.948s 00:18:32.498 user 0m36.179s 00:18:32.498 sys 0m4.313s 00:18:32.498 14:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.498 14:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.498 ************************************ 00:18:32.498 END TEST raid_rebuild_test_sb 00:18:32.498 ************************************ 00:18:32.498 14:18:03 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:18:32.498 14:18:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:32.498 14:18:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.498 14:18:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:18:32.758 ************************************ 00:18:32.758 START TEST raid_rebuild_test_io 00:18:32.758 ************************************ 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79288 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79288 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79288 ']' 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.758 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.758 [2024-11-27 14:18:03.135739] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:18:32.758 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:32.758 Zero copy mechanism will not be used. 00:18:32.758 [2024-11-27 14:18:03.136897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79288 ] 00:18:33.017 [2024-11-27 14:18:03.327094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.017 [2024-11-27 14:18:03.469226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.275 [2024-11-27 14:18:03.679284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.275 [2024-11-27 14:18:03.679350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.843 BaseBdev1_malloc 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.843 [2024-11-27 14:18:04.210038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:33.843 [2024-11-27 14:18:04.210293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.843 [2024-11-27 14:18:04.210491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.843 [2024-11-27 14:18:04.210683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.843 [2024-11-27 14:18:04.213977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.843 [2024-11-27 14:18:04.214055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.843 BaseBdev1 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:18:33.843 BaseBdev2_malloc 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.843 [2024-11-27 14:18:04.263618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:33.843 [2024-11-27 14:18:04.263712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.843 [2024-11-27 14:18:04.263746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:33.843 [2024-11-27 14:18:04.263764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.843 [2024-11-27 14:18:04.266794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.843 [2024-11-27 14:18:04.266872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:33.843 BaseBdev2 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.843 BaseBdev3_malloc 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.843 [2024-11-27 14:18:04.341213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:33.843 [2024-11-27 14:18:04.341300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.843 [2024-11-27 14:18:04.341348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:33.843 [2024-11-27 14:18:04.341382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.843 [2024-11-27 14:18:04.344610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.843 [2024-11-27 14:18:04.344658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:33.843 BaseBdev3 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.843 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 BaseBdev4_malloc 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 [2024-11-27 14:18:04.395460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:34.102 [2024-11-27 14:18:04.395702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.102 [2024-11-27 14:18:04.395768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:34.102 [2024-11-27 14:18:04.395847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.102 [2024-11-27 14:18:04.398931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.102 [2024-11-27 14:18:04.398984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:34.102 BaseBdev4 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 spare_malloc 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 spare_delay 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 [2024-11-27 14:18:04.457339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.102 [2024-11-27 14:18:04.457439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.102 [2024-11-27 14:18:04.457465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:34.102 [2024-11-27 14:18:04.457482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.102 [2024-11-27 14:18:04.460534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.102 [2024-11-27 14:18:04.460597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.102 spare 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.102 [2024-11-27 14:18:04.465470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.102 [2024-11-27 14:18:04.468218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.102 [2024-11-27 14:18:04.468435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.102 [2024-11-27 14:18:04.468631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:18:34.102 [2024-11-27 14:18:04.468884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:34.102 [2024-11-27 14:18:04.469052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:34.102 [2024-11-27 14:18:04.469569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:34.102 [2024-11-27 14:18:04.470040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:34.102 [2024-11-27 14:18:04.470198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:34.102 [2024-11-27 14:18:04.470705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.102 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.103 "name": "raid_bdev1", 00:18:34.103 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:34.103 "strip_size_kb": 0, 00:18:34.103 "state": "online", 00:18:34.103 "raid_level": "raid1", 00:18:34.103 "superblock": false, 00:18:34.103 "num_base_bdevs": 4, 00:18:34.103 "num_base_bdevs_discovered": 4, 00:18:34.103 "num_base_bdevs_operational": 4, 00:18:34.103 "base_bdevs_list": [ 00:18:34.103 { 00:18:34.103 "name": "BaseBdev1", 00:18:34.103 "uuid": "684ce0c8-3743-5341-9acf-ad834f72842e", 00:18:34.103 "is_configured": true, 00:18:34.103 "data_offset": 0, 00:18:34.103 "data_size": 65536 00:18:34.103 }, 00:18:34.103 { 00:18:34.103 "name": "BaseBdev2", 00:18:34.103 "uuid": "411a3bd5-3949-5edc-b3be-99cf5559d2fe", 00:18:34.103 "is_configured": true, 00:18:34.103 "data_offset": 0, 00:18:34.103 "data_size": 65536 00:18:34.103 }, 00:18:34.103 { 00:18:34.103 "name": "BaseBdev3", 00:18:34.103 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:34.103 "is_configured": true, 00:18:34.103 "data_offset": 0, 00:18:34.103 "data_size": 65536 00:18:34.103 }, 00:18:34.103 { 00:18:34.103 "name": "BaseBdev4", 00:18:34.103 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:34.103 "is_configured": true, 00:18:34.103 "data_offset": 0, 00:18:34.103 "data_size": 65536 00:18:34.103 } 00:18:34.103 ] 00:18:34.103 }' 00:18:34.103 
14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.103 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.671 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.671 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:34.671 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.671 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.671 [2024-11-27 14:18:05.011211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:34.672 14:18:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.672 [2024-11-27 14:18:05.114811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.672 "name": "raid_bdev1", 00:18:34.672 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:34.672 "strip_size_kb": 0, 00:18:34.672 "state": "online", 00:18:34.672 "raid_level": "raid1", 00:18:34.672 "superblock": false, 00:18:34.672 "num_base_bdevs": 4, 00:18:34.672 "num_base_bdevs_discovered": 3, 00:18:34.672 "num_base_bdevs_operational": 3, 00:18:34.672 "base_bdevs_list": [ 00:18:34.672 { 00:18:34.672 "name": null, 00:18:34.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.672 "is_configured": false, 00:18:34.672 "data_offset": 0, 00:18:34.672 "data_size": 65536 00:18:34.672 }, 00:18:34.672 { 00:18:34.672 "name": "BaseBdev2", 00:18:34.672 "uuid": "411a3bd5-3949-5edc-b3be-99cf5559d2fe", 00:18:34.672 "is_configured": true, 00:18:34.672 "data_offset": 0, 00:18:34.672 "data_size": 65536 00:18:34.672 }, 00:18:34.672 { 00:18:34.672 "name": "BaseBdev3", 00:18:34.672 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:34.672 "is_configured": true, 00:18:34.672 "data_offset": 0, 00:18:34.672 "data_size": 65536 00:18:34.672 }, 00:18:34.672 { 00:18:34.672 "name": "BaseBdev4", 00:18:34.672 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:34.672 "is_configured": true, 00:18:34.672 "data_offset": 0, 00:18:34.672 "data_size": 65536 00:18:34.672 } 00:18:34.672 ] 00:18:34.672 }' 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.672 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.930 [2024-11-27 14:18:05.255115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:34.930 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:34.930 Zero copy mechanism will not be used. 00:18:34.930 Running I/O for 60 seconds... 
00:18:35.189 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.189 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.189 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.189 [2024-11-27 14:18:05.628352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.189 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.189 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:35.189 [2024-11-27 14:18:05.678009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:35.189 [2024-11-27 14:18:05.680990] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.448 [2024-11-27 14:18:05.810155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:35.448 [2024-11-27 14:18:05.811954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:35.706 [2024-11-27 14:18:06.053281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:35.706 [2024-11-27 14:18:06.054463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:35.965 114.00 IOPS, 342.00 MiB/s [2024-11-27T14:18:06.478Z] [2024-11-27 14:18:06.437805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:36.223 [2024-11-27 14:18:06.661288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:36.223 [2024-11-27 14:18:06.662036] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.223 "name": "raid_bdev1", 00:18:36.223 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:36.223 "strip_size_kb": 0, 00:18:36.223 "state": "online", 00:18:36.223 "raid_level": "raid1", 00:18:36.223 "superblock": false, 00:18:36.223 "num_base_bdevs": 4, 00:18:36.223 "num_base_bdevs_discovered": 4, 00:18:36.223 "num_base_bdevs_operational": 4, 00:18:36.223 "process": { 00:18:36.223 "type": "rebuild", 00:18:36.223 "target": "spare", 00:18:36.223 "progress": { 00:18:36.223 "blocks": 10240, 00:18:36.223 "percent": 15 00:18:36.223 } 00:18:36.223 }, 00:18:36.223 "base_bdevs_list": [ 00:18:36.223 { 00:18:36.223 "name": "spare", 00:18:36.223 "uuid": 
"a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:36.223 "is_configured": true, 00:18:36.223 "data_offset": 0, 00:18:36.223 "data_size": 65536 00:18:36.223 }, 00:18:36.223 { 00:18:36.223 "name": "BaseBdev2", 00:18:36.223 "uuid": "411a3bd5-3949-5edc-b3be-99cf5559d2fe", 00:18:36.223 "is_configured": true, 00:18:36.223 "data_offset": 0, 00:18:36.223 "data_size": 65536 00:18:36.223 }, 00:18:36.223 { 00:18:36.223 "name": "BaseBdev3", 00:18:36.223 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:36.223 "is_configured": true, 00:18:36.223 "data_offset": 0, 00:18:36.223 "data_size": 65536 00:18:36.223 }, 00:18:36.223 { 00:18:36.223 "name": "BaseBdev4", 00:18:36.223 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:36.223 "is_configured": true, 00:18:36.223 "data_offset": 0, 00:18:36.223 "data_size": 65536 00:18:36.223 } 00:18:36.223 ] 00:18:36.223 }' 00:18:36.223 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:36.481 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.481 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.481 [2024-11-27 14:18:06.850406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.739 [2024-11-27 14:18:06.992294] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:36.739 [2024-11-27 14:18:07.016612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:36.739 [2024-11-27 14:18:07.016892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.739 [2024-11-27 14:18:07.016963] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:36.739 [2024-11-27 14:18:07.041163] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.739 14:18:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.739 "name": "raid_bdev1", 00:18:36.739 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:36.739 "strip_size_kb": 0, 00:18:36.739 "state": "online", 00:18:36.739 "raid_level": "raid1", 00:18:36.739 "superblock": false, 00:18:36.739 "num_base_bdevs": 4, 00:18:36.739 "num_base_bdevs_discovered": 3, 00:18:36.739 "num_base_bdevs_operational": 3, 00:18:36.739 "base_bdevs_list": [ 00:18:36.739 { 00:18:36.739 "name": null, 00:18:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.739 "is_configured": false, 00:18:36.739 "data_offset": 0, 00:18:36.739 "data_size": 65536 00:18:36.739 }, 00:18:36.739 { 00:18:36.739 "name": "BaseBdev2", 00:18:36.739 "uuid": "411a3bd5-3949-5edc-b3be-99cf5559d2fe", 00:18:36.739 "is_configured": true, 00:18:36.739 "data_offset": 0, 00:18:36.739 "data_size": 65536 00:18:36.739 }, 00:18:36.739 { 00:18:36.739 "name": "BaseBdev3", 00:18:36.739 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:36.739 "is_configured": true, 00:18:36.739 "data_offset": 0, 00:18:36.739 "data_size": 65536 00:18:36.739 }, 00:18:36.739 { 00:18:36.739 "name": "BaseBdev4", 00:18:36.739 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:36.739 "is_configured": true, 00:18:36.739 "data_offset": 0, 00:18:36.739 "data_size": 65536 00:18:36.739 } 00:18:36.739 ] 00:18:36.739 }' 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.739 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.257 94.50 IOPS, 283.50 MiB/s [2024-11-27T14:18:07.770Z] 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.257 14:18:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.257 "name": "raid_bdev1", 00:18:37.257 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:37.257 "strip_size_kb": 0, 00:18:37.257 "state": "online", 00:18:37.257 "raid_level": "raid1", 00:18:37.257 "superblock": false, 00:18:37.257 "num_base_bdevs": 4, 00:18:37.257 "num_base_bdevs_discovered": 3, 00:18:37.257 "num_base_bdevs_operational": 3, 00:18:37.257 "base_bdevs_list": [ 00:18:37.257 { 00:18:37.257 "name": null, 00:18:37.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.257 "is_configured": false, 00:18:37.257 "data_offset": 0, 00:18:37.257 "data_size": 65536 00:18:37.257 }, 00:18:37.257 { 00:18:37.257 "name": "BaseBdev2", 00:18:37.257 "uuid": "411a3bd5-3949-5edc-b3be-99cf5559d2fe", 00:18:37.257 "is_configured": true, 00:18:37.257 "data_offset": 0, 00:18:37.257 "data_size": 65536 00:18:37.257 }, 00:18:37.257 { 00:18:37.257 "name": "BaseBdev3", 00:18:37.257 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 
00:18:37.257 "is_configured": true, 00:18:37.257 "data_offset": 0, 00:18:37.257 "data_size": 65536 00:18:37.257 }, 00:18:37.257 { 00:18:37.257 "name": "BaseBdev4", 00:18:37.257 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:37.257 "is_configured": true, 00:18:37.257 "data_offset": 0, 00:18:37.257 "data_size": 65536 00:18:37.257 } 00:18:37.257 ] 00:18:37.257 }' 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.257 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.515 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.515 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:37.515 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.515 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.515 [2024-11-27 14:18:07.820709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:37.515 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.515 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:37.515 [2024-11-27 14:18:07.888351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:37.515 [2024-11-27 14:18:07.891317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.775 [2024-11-27 14:18:08.034756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:37.775 [2024-11-27 14:18:08.036612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:38.343 119.33 IOPS, 358.00 MiB/s [2024-11-27T14:18:08.856Z] [2024-11-27 14:18:08.588087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:38.343 [2024-11-27 14:18:08.721679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.602 "name": "raid_bdev1", 00:18:38.602 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:38.602 "strip_size_kb": 0, 00:18:38.602 "state": "online", 00:18:38.602 "raid_level": "raid1", 00:18:38.602 "superblock": false, 00:18:38.602 "num_base_bdevs": 4, 00:18:38.602 "num_base_bdevs_discovered": 4, 00:18:38.602 "num_base_bdevs_operational": 
4, 00:18:38.602 "process": { 00:18:38.602 "type": "rebuild", 00:18:38.602 "target": "spare", 00:18:38.602 "progress": { 00:18:38.602 "blocks": 12288, 00:18:38.602 "percent": 18 00:18:38.602 } 00:18:38.602 }, 00:18:38.602 "base_bdevs_list": [ 00:18:38.602 { 00:18:38.602 "name": "spare", 00:18:38.602 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:38.602 "is_configured": true, 00:18:38.602 "data_offset": 0, 00:18:38.602 "data_size": 65536 00:18:38.602 }, 00:18:38.602 { 00:18:38.602 "name": "BaseBdev2", 00:18:38.602 "uuid": "411a3bd5-3949-5edc-b3be-99cf5559d2fe", 00:18:38.602 "is_configured": true, 00:18:38.602 "data_offset": 0, 00:18:38.602 "data_size": 65536 00:18:38.602 }, 00:18:38.602 { 00:18:38.602 "name": "BaseBdev3", 00:18:38.602 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:38.602 "is_configured": true, 00:18:38.602 "data_offset": 0, 00:18:38.602 "data_size": 65536 00:18:38.602 }, 00:18:38.602 { 00:18:38.602 "name": "BaseBdev4", 00:18:38.602 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:38.602 "is_configured": true, 00:18:38.602 "data_offset": 0, 00:18:38.602 "data_size": 65536 00:18:38.602 } 00:18:38.602 ] 00:18:38.602 }' 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.602 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:38.602 14:18:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.602 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.602 [2024-11-27 14:18:09.055558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.602 [2024-11-27 14:18:09.107653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:38.861 [2024-11-27 14:18:09.219411] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:38.861 [2024-11-27 14:18:09.219454] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.861 14:18:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.861 111.00 IOPS, 333.00 MiB/s [2024-11-27T14:18:09.374Z] 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.861 "name": "raid_bdev1", 00:18:38.861 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:38.861 "strip_size_kb": 0, 00:18:38.861 "state": "online", 00:18:38.861 "raid_level": "raid1", 00:18:38.861 "superblock": false, 00:18:38.861 "num_base_bdevs": 4, 00:18:38.861 "num_base_bdevs_discovered": 3, 00:18:38.861 "num_base_bdevs_operational": 3, 00:18:38.861 "process": { 00:18:38.861 "type": "rebuild", 00:18:38.861 "target": "spare", 00:18:38.861 "progress": { 00:18:38.861 "blocks": 16384, 00:18:38.861 "percent": 25 00:18:38.861 } 00:18:38.861 }, 00:18:38.861 "base_bdevs_list": [ 00:18:38.861 { 00:18:38.861 "name": "spare", 00:18:38.861 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:38.861 "is_configured": true, 00:18:38.861 "data_offset": 0, 00:18:38.861 "data_size": 65536 00:18:38.861 }, 00:18:38.861 { 00:18:38.861 "name": null, 00:18:38.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.861 "is_configured": false, 00:18:38.861 "data_offset": 0, 00:18:38.861 "data_size": 65536 00:18:38.861 }, 00:18:38.861 { 00:18:38.861 "name": "BaseBdev3", 00:18:38.861 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:38.861 "is_configured": true, 00:18:38.861 "data_offset": 0, 00:18:38.861 "data_size": 65536 00:18:38.861 }, 00:18:38.861 { 00:18:38.861 "name": "BaseBdev4", 00:18:38.861 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:38.861 "is_configured": true, 00:18:38.861 "data_offset": 
0, 00:18:38.861 "data_size": 65536 00:18:38.861 } 00:18:38.861 ] 00:18:38.861 }' 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.861 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=531 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.120 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.120 "name": "raid_bdev1", 
00:18:39.120 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:39.120 "strip_size_kb": 0, 00:18:39.120 "state": "online", 00:18:39.120 "raid_level": "raid1", 00:18:39.120 "superblock": false, 00:18:39.120 "num_base_bdevs": 4, 00:18:39.120 "num_base_bdevs_discovered": 3, 00:18:39.120 "num_base_bdevs_operational": 3, 00:18:39.120 "process": { 00:18:39.120 "type": "rebuild", 00:18:39.120 "target": "spare", 00:18:39.120 "progress": { 00:18:39.120 "blocks": 18432, 00:18:39.120 "percent": 28 00:18:39.120 } 00:18:39.120 }, 00:18:39.120 "base_bdevs_list": [ 00:18:39.120 { 00:18:39.120 "name": "spare", 00:18:39.120 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:39.120 "is_configured": true, 00:18:39.120 "data_offset": 0, 00:18:39.120 "data_size": 65536 00:18:39.120 }, 00:18:39.120 { 00:18:39.120 "name": null, 00:18:39.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.120 "is_configured": false, 00:18:39.120 "data_offset": 0, 00:18:39.120 "data_size": 65536 00:18:39.121 }, 00:18:39.121 { 00:18:39.121 "name": "BaseBdev3", 00:18:39.121 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:39.121 "is_configured": true, 00:18:39.121 "data_offset": 0, 00:18:39.121 "data_size": 65536 00:18:39.121 }, 00:18:39.121 { 00:18:39.121 "name": "BaseBdev4", 00:18:39.121 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:39.121 "is_configured": true, 00:18:39.121 "data_offset": 0, 00:18:39.121 "data_size": 65536 00:18:39.121 } 00:18:39.121 ] 00:18:39.121 }' 00:18:39.121 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.121 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.121 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.121 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.121 14:18:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.379 [2024-11-27 14:18:09.875207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:39.638 [2024-11-27 14:18:10.103846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:40.155 95.80 IOPS, 287.40 MiB/s [2024-11-27T14:18:10.668Z] [2024-11-27 14:18:10.457489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:40.155 [2024-11-27 14:18:10.458146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.155 14:18:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.155 "name": "raid_bdev1", 00:18:40.155 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:40.155 "strip_size_kb": 0, 00:18:40.155 "state": "online", 00:18:40.155 "raid_level": "raid1", 00:18:40.155 "superblock": false, 00:18:40.155 "num_base_bdevs": 4, 00:18:40.155 "num_base_bdevs_discovered": 3, 00:18:40.155 "num_base_bdevs_operational": 3, 00:18:40.155 "process": { 00:18:40.155 "type": "rebuild", 00:18:40.155 "target": "spare", 00:18:40.155 "progress": { 00:18:40.155 "blocks": 32768, 00:18:40.155 "percent": 50 00:18:40.155 } 00:18:40.155 }, 00:18:40.155 "base_bdevs_list": [ 00:18:40.155 { 00:18:40.155 "name": "spare", 00:18:40.155 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:40.155 "is_configured": true, 00:18:40.155 "data_offset": 0, 00:18:40.155 "data_size": 65536 00:18:40.155 }, 00:18:40.155 { 00:18:40.155 "name": null, 00:18:40.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.155 "is_configured": false, 00:18:40.155 "data_offset": 0, 00:18:40.155 "data_size": 65536 00:18:40.155 }, 00:18:40.155 { 00:18:40.155 "name": "BaseBdev3", 00:18:40.155 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:40.155 "is_configured": true, 00:18:40.155 "data_offset": 0, 00:18:40.155 "data_size": 65536 00:18:40.155 }, 00:18:40.155 { 00:18:40.155 "name": "BaseBdev4", 00:18:40.155 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:40.155 "is_configured": true, 00:18:40.155 "data_offset": 0, 00:18:40.155 "data_size": 65536 00:18:40.155 } 00:18:40.155 ] 00:18:40.155 }' 00:18:40.155 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.414 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.414 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.414 14:18:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.414 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.414 [2024-11-27 14:18:10.857560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:40.673 [2024-11-27 14:18:11.080619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:41.008 87.17 IOPS, 261.50 MiB/s [2024-11-27T14:18:11.521Z] [2024-11-27 14:18:11.345669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:41.284 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.284 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.284 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.284 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.285 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.543 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:41.543 "name": "raid_bdev1", 00:18:41.543 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:41.543 "strip_size_kb": 0, 00:18:41.543 "state": "online", 00:18:41.543 "raid_level": "raid1", 00:18:41.543 "superblock": false, 00:18:41.543 "num_base_bdevs": 4, 00:18:41.543 "num_base_bdevs_discovered": 3, 00:18:41.543 "num_base_bdevs_operational": 3, 00:18:41.543 "process": { 00:18:41.543 "type": "rebuild", 00:18:41.543 "target": "spare", 00:18:41.543 "progress": { 00:18:41.543 "blocks": 51200, 00:18:41.543 "percent": 78 00:18:41.543 } 00:18:41.543 }, 00:18:41.543 "base_bdevs_list": [ 00:18:41.543 { 00:18:41.543 "name": "spare", 00:18:41.543 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:41.543 "is_configured": true, 00:18:41.543 "data_offset": 0, 00:18:41.543 "data_size": 65536 00:18:41.543 }, 00:18:41.543 { 00:18:41.543 "name": null, 00:18:41.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.543 "is_configured": false, 00:18:41.543 "data_offset": 0, 00:18:41.543 "data_size": 65536 00:18:41.543 }, 00:18:41.543 { 00:18:41.543 "name": "BaseBdev3", 00:18:41.543 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:41.543 "is_configured": true, 00:18:41.543 "data_offset": 0, 00:18:41.543 "data_size": 65536 00:18:41.543 }, 00:18:41.543 { 00:18:41.543 "name": "BaseBdev4", 00:18:41.543 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:41.543 "is_configured": true, 00:18:41.543 "data_offset": 0, 00:18:41.543 "data_size": 65536 00:18:41.543 } 00:18:41.543 ] 00:18:41.543 }' 00:18:41.543 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.543 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.543 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.543 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:41.543 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.062 79.29 IOPS, 237.86 MiB/s [2024-11-27T14:18:12.575Z] [2024-11-27 14:18:12.494218] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:42.320 [2024-11-27 14:18:12.602773] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:42.320 [2024-11-27 14:18:12.606613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.578 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.578 "name": "raid_bdev1", 00:18:42.578 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:42.578 "strip_size_kb": 0, 00:18:42.578 "state": 
"online", 00:18:42.578 "raid_level": "raid1", 00:18:42.578 "superblock": false, 00:18:42.578 "num_base_bdevs": 4, 00:18:42.578 "num_base_bdevs_discovered": 3, 00:18:42.578 "num_base_bdevs_operational": 3, 00:18:42.578 "base_bdevs_list": [ 00:18:42.578 { 00:18:42.578 "name": "spare", 00:18:42.578 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:42.578 "is_configured": true, 00:18:42.578 "data_offset": 0, 00:18:42.578 "data_size": 65536 00:18:42.578 }, 00:18:42.578 { 00:18:42.578 "name": null, 00:18:42.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.578 "is_configured": false, 00:18:42.578 "data_offset": 0, 00:18:42.579 "data_size": 65536 00:18:42.579 }, 00:18:42.579 { 00:18:42.579 "name": "BaseBdev3", 00:18:42.579 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:42.579 "is_configured": true, 00:18:42.579 "data_offset": 0, 00:18:42.579 "data_size": 65536 00:18:42.579 }, 00:18:42.579 { 00:18:42.579 "name": "BaseBdev4", 00:18:42.579 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:42.579 "is_configured": true, 00:18:42.579 "data_offset": 0, 00:18:42.579 "data_size": 65536 00:18:42.579 } 00:18:42.579 ] 00:18:42.579 }' 00:18:42.579 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.579 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.837 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.837 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.837 "name": "raid_bdev1", 00:18:42.837 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:42.837 "strip_size_kb": 0, 00:18:42.837 "state": "online", 00:18:42.837 "raid_level": "raid1", 00:18:42.837 "superblock": false, 00:18:42.837 "num_base_bdevs": 4, 00:18:42.837 "num_base_bdevs_discovered": 3, 00:18:42.837 "num_base_bdevs_operational": 3, 00:18:42.837 "base_bdevs_list": [ 00:18:42.837 { 00:18:42.837 "name": "spare", 00:18:42.837 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:42.837 "is_configured": true, 00:18:42.837 "data_offset": 0, 00:18:42.837 "data_size": 65536 00:18:42.837 }, 00:18:42.837 { 00:18:42.837 "name": null, 00:18:42.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.837 "is_configured": false, 00:18:42.837 "data_offset": 0, 00:18:42.837 "data_size": 65536 00:18:42.837 }, 00:18:42.837 { 00:18:42.837 "name": "BaseBdev3", 00:18:42.837 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:42.837 "is_configured": true, 00:18:42.837 "data_offset": 0, 00:18:42.837 "data_size": 65536 00:18:42.837 }, 00:18:42.837 { 00:18:42.838 "name": 
"BaseBdev4", 00:18:42.838 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:42.838 "is_configured": true, 00:18:42.838 "data_offset": 0, 00:18:42.838 "data_size": 65536 00:18:42.838 } 00:18:42.838 ] 00:18:42.838 }' 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.838 74.38 IOPS, 223.12 MiB/s [2024-11-27T14:18:13.351Z] 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.838 "name": "raid_bdev1", 00:18:42.838 "uuid": "c5b6e7ae-6a7b-49a8-941a-3cf44e7dec05", 00:18:42.838 "strip_size_kb": 0, 00:18:42.838 "state": "online", 00:18:42.838 "raid_level": "raid1", 00:18:42.838 "superblock": false, 00:18:42.838 "num_base_bdevs": 4, 00:18:42.838 "num_base_bdevs_discovered": 3, 00:18:42.838 "num_base_bdevs_operational": 3, 00:18:42.838 "base_bdevs_list": [ 00:18:42.838 { 00:18:42.838 "name": "spare", 00:18:42.838 "uuid": "a6946d5f-3b4f-5f17-a186-5990c653d537", 00:18:42.838 "is_configured": true, 00:18:42.838 "data_offset": 0, 00:18:42.838 "data_size": 65536 00:18:42.838 }, 00:18:42.838 { 00:18:42.838 "name": null, 00:18:42.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.838 "is_configured": false, 00:18:42.838 "data_offset": 0, 00:18:42.838 "data_size": 65536 00:18:42.838 }, 00:18:42.838 { 00:18:42.838 "name": "BaseBdev3", 00:18:42.838 "uuid": "33764452-394c-5da3-be59-27be5b446e65", 00:18:42.838 "is_configured": true, 00:18:42.838 "data_offset": 0, 00:18:42.838 "data_size": 65536 00:18:42.838 }, 00:18:42.838 { 00:18:42.838 "name": "BaseBdev4", 00:18:42.838 "uuid": "704f0513-f698-5a06-a7f3-b2d50c15cde0", 00:18:42.838 "is_configured": true, 00:18:42.838 "data_offset": 0, 00:18:42.838 "data_size": 65536 00:18:42.838 } 00:18:42.838 ] 00:18:42.838 }' 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.838 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.405 
14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.405 [2024-11-27 14:18:13.786537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.405 [2024-11-27 14:18:13.786570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.405 00:18:43.405 Latency(us) 00:18:43.405 [2024-11-27T14:18:13.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.405 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:43.405 raid_bdev1 : 8.59 71.02 213.07 0.00 0.00 20629.19 296.03 123922.62 00:18:43.405 [2024-11-27T14:18:13.918Z] =================================================================================================================== 00:18:43.405 [2024-11-27T14:18:13.918Z] Total : 71.02 213.07 0.00 0.00 20629.19 296.03 123922.62 00:18:43.405 { 00:18:43.405 "results": [ 00:18:43.405 { 00:18:43.405 "job": "raid_bdev1", 00:18:43.405 "core_mask": "0x1", 00:18:43.405 "workload": "randrw", 00:18:43.405 "percentage": 50, 00:18:43.405 "status": "finished", 00:18:43.405 "queue_depth": 2, 00:18:43.405 "io_size": 3145728, 00:18:43.405 "runtime": 8.588561, 00:18:43.405 "iops": 71.02470367271071, 00:18:43.405 "mibps": 213.07411101813216, 00:18:43.405 "io_failed": 0, 00:18:43.405 "io_timeout": 0, 00:18:43.405 "avg_latency_us": 20629.189627421758, 00:18:43.405 "min_latency_us": 296.0290909090909, 00:18:43.405 "max_latency_us": 123922.61818181818 00:18:43.405 } 00:18:43.405 ], 00:18:43.405 "core_count": 1 00:18:43.405 } 00:18:43.405 [2024-11-27 14:18:13.867870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.405 [2024-11-27 14:18:13.867962] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.405 [2024-11-27 14:18:13.868200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.405 [2024-11-27 14:18:13.868216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.405 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:43.664 14:18:13 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:43.664 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:43.923 /dev/nbd0 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:43.923 1+0 records in 00:18:43.923 1+0 records out 00:18:43.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355961 s, 11.5 MB/s 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.923 14:18:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:43.923 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:43.924 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:44.182 /dev/nbd1 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.182 1+0 records in 00:18:44.182 1+0 records out 00:18:44.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311559 s, 13.1 MB/s 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.182 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.441 14:18:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.699 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:44.957 /dev/nbd1 00:18:44.957 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:44.957 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.958 1+0 records in 00:18:44.958 1+0 records out 00:18:44.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335222 s, 12.2 MB/s 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.958 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.215 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.474 14:18:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79288 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79288 ']' 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79288 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79288 00:18:45.733 killing process with pid 79288 00:18:45.733 Received shutdown 
signal, test time was about 10.983331 seconds 00:18:45.733 00:18:45.733 Latency(us) 00:18:45.733 [2024-11-27T14:18:16.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.733 [2024-11-27T14:18:16.246Z] =================================================================================================================== 00:18:45.733 [2024-11-27T14:18:16.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79288' 00:18:45.733 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79288 00:18:45.733 [2024-11-27 14:18:16.241188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:45.734 14:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79288 00:18:46.300 [2024-11-27 14:18:16.615966] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.237 14:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:47.237 ************************************ 00:18:47.237 END TEST raid_rebuild_test_io 00:18:47.237 ************************************ 00:18:47.237 00:18:47.237 real 0m14.694s 00:18:47.237 user 0m19.468s 00:18:47.237 sys 0m1.903s 00:18:47.237 14:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.237 14:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.496 14:18:17 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:18:47.496 14:18:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:47.496 14:18:17 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.496 14:18:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.496 ************************************ 00:18:47.496 START TEST raid_rebuild_test_sb_io 00:18:47.496 ************************************ 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:47.496 14:18:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79710 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79710 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79710 ']' 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.496 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.496 [2024-11-27 14:18:17.863974] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:18:47.496 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:47.496 Zero copy mechanism will not be used. 
00:18:47.496 [2024-11-27 14:18:17.864157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79710 ] 00:18:47.754 [2024-11-27 14:18:18.039199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.754 [2024-11-27 14:18:18.171320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.013 [2024-11-27 14:18:18.370139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.013 [2024-11-27 14:18:18.370188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.581 BaseBdev1_malloc 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.581 [2024-11-27 14:18:18.894620] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:48.581 [2024-11-27 14:18:18.894701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.581 [2024-11-27 14:18:18.894731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:48.581 [2024-11-27 14:18:18.894749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.581 [2024-11-27 14:18:18.897420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.581 [2024-11-27 14:18:18.897476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:48.581 BaseBdev1 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.581 BaseBdev2_malloc 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.581 [2024-11-27 14:18:18.946290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:48.581 [2024-11-27 14:18:18.946365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:48.581 [2024-11-27 14:18:18.946398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:48.581 [2024-11-27 14:18:18.946430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.581 [2024-11-27 14:18:18.949233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.581 [2024-11-27 14:18:18.949290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:48.581 BaseBdev2 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.581 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.581 BaseBdev3_malloc 00:18:48.582 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.582 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:48.582 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.582 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.582 [2024-11-27 14:18:19.004560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:48.582 [2024-11-27 14:18:19.004634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.582 [2024-11-27 14:18:19.004662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:48.582 
[2024-11-27 14:18:19.004678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.582 [2024-11-27 14:18:19.007415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.582 [2024-11-27 14:18:19.007471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:48.582 BaseBdev3 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.582 BaseBdev4_malloc 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.582 [2024-11-27 14:18:19.055691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:48.582 [2024-11-27 14:18:19.055775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.582 [2024-11-27 14:18:19.055804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:48.582 [2024-11-27 14:18:19.055821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.582 [2024-11-27 14:18:19.058681] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.582 [2024-11-27 14:18:19.058739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:48.582 BaseBdev4 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.582 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.841 spare_malloc 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.841 spare_delay 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.841 [2024-11-27 14:18:19.115894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:48.841 [2024-11-27 14:18:19.115966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.841 [2024-11-27 14:18:19.115990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:18:48.841 [2024-11-27 14:18:19.116006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.841 [2024-11-27 14:18:19.118631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.841 [2024-11-27 14:18:19.118687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:48.841 spare 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.841 [2024-11-27 14:18:19.123955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:48.841 [2024-11-27 14:18:19.126242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.841 [2024-11-27 14:18:19.126335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.841 [2024-11-27 14:18:19.126457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:48.841 [2024-11-27 14:18:19.126722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:48.841 [2024-11-27 14:18:19.126746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:48.841 [2024-11-27 14:18:19.127074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:48.841 [2024-11-27 14:18:19.127321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:48.841 [2024-11-27 14:18:19.127343] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:48.841 [2024-11-27 14:18:19.127518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.841 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.841 "name": "raid_bdev1", 00:18:48.841 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:18:48.841 "strip_size_kb": 0, 00:18:48.841 "state": "online", 00:18:48.841 "raid_level": "raid1", 00:18:48.841 "superblock": true, 00:18:48.841 "num_base_bdevs": 4, 00:18:48.841 "num_base_bdevs_discovered": 4, 00:18:48.841 "num_base_bdevs_operational": 4, 00:18:48.841 "base_bdevs_list": [ 00:18:48.841 { 00:18:48.841 "name": "BaseBdev1", 00:18:48.841 "uuid": "987a1108-0a45-5612-b103-d92507077f16", 00:18:48.841 "is_configured": true, 00:18:48.841 "data_offset": 2048, 00:18:48.841 "data_size": 63488 00:18:48.841 }, 00:18:48.841 { 00:18:48.841 "name": "BaseBdev2", 00:18:48.841 "uuid": "2e7b10fd-ede4-56b8-9c96-16d4f00b58fa", 00:18:48.841 "is_configured": true, 00:18:48.841 "data_offset": 2048, 00:18:48.841 "data_size": 63488 00:18:48.841 }, 00:18:48.841 { 00:18:48.841 "name": "BaseBdev3", 00:18:48.841 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:18:48.841 "is_configured": true, 00:18:48.841 "data_offset": 2048, 00:18:48.841 "data_size": 63488 00:18:48.841 }, 00:18:48.841 { 00:18:48.841 "name": "BaseBdev4", 00:18:48.841 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:18:48.841 "is_configured": true, 00:18:48.841 "data_offset": 2048, 00:18:48.842 "data_size": 63488 00:18:48.842 } 00:18:48.842 ] 00:18:48.842 }' 00:18:48.842 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.842 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 [2024-11-27 14:18:19.640630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 [2024-11-27 14:18:19.740218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.409 14:18:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.409 "name": "raid_bdev1", 00:18:49.409 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:18:49.409 "strip_size_kb": 0, 00:18:49.409 "state": "online", 00:18:49.409 "raid_level": "raid1", 00:18:49.409 
"superblock": true, 00:18:49.409 "num_base_bdevs": 4, 00:18:49.409 "num_base_bdevs_discovered": 3, 00:18:49.409 "num_base_bdevs_operational": 3, 00:18:49.409 "base_bdevs_list": [ 00:18:49.409 { 00:18:49.409 "name": null, 00:18:49.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.409 "is_configured": false, 00:18:49.409 "data_offset": 0, 00:18:49.409 "data_size": 63488 00:18:49.409 }, 00:18:49.409 { 00:18:49.409 "name": "BaseBdev2", 00:18:49.409 "uuid": "2e7b10fd-ede4-56b8-9c96-16d4f00b58fa", 00:18:49.409 "is_configured": true, 00:18:49.409 "data_offset": 2048, 00:18:49.409 "data_size": 63488 00:18:49.409 }, 00:18:49.409 { 00:18:49.409 "name": "BaseBdev3", 00:18:49.409 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:18:49.409 "is_configured": true, 00:18:49.409 "data_offset": 2048, 00:18:49.409 "data_size": 63488 00:18:49.409 }, 00:18:49.409 { 00:18:49.409 "name": "BaseBdev4", 00:18:49.409 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:18:49.409 "is_configured": true, 00:18:49.409 "data_offset": 2048, 00:18:49.409 "data_size": 63488 00:18:49.409 } 00:18:49.409 ] 00:18:49.409 }' 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.409 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.409 [2024-11-27 14:18:19.872845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:49.409 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:49.409 Zero copy mechanism will not be used. 00:18:49.409 Running I/O for 60 seconds... 
00:18:49.977 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.977 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.977 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.977 [2024-11-27 14:18:20.275936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.977 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.977 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:49.977 [2024-11-27 14:18:20.360237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:49.977 [2024-11-27 14:18:20.362935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.237 [2024-11-27 14:18:20.491977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:50.237 [2024-11-27 14:18:20.493635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:50.237 [2024-11-27 14:18:20.721770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:50.237 [2024-11-27 14:18:20.722818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:50.756 150.00 IOPS, 450.00 MiB/s [2024-11-27T14:18:21.269Z] [2024-11-27 14:18:21.057567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:50.756 [2024-11-27 14:18:21.220065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:51.015 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.015 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.015 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.015 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.016 "name": "raid_bdev1", 00:18:51.016 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:18:51.016 "strip_size_kb": 0, 00:18:51.016 "state": "online", 00:18:51.016 "raid_level": "raid1", 00:18:51.016 "superblock": true, 00:18:51.016 "num_base_bdevs": 4, 00:18:51.016 "num_base_bdevs_discovered": 4, 00:18:51.016 "num_base_bdevs_operational": 4, 00:18:51.016 "process": { 00:18:51.016 "type": "rebuild", 00:18:51.016 "target": "spare", 00:18:51.016 "progress": { 00:18:51.016 "blocks": 10240, 00:18:51.016 "percent": 16 00:18:51.016 } 00:18:51.016 }, 00:18:51.016 "base_bdevs_list": [ 00:18:51.016 { 00:18:51.016 "name": "spare", 00:18:51.016 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2", 00:18:51.016 "is_configured": true, 00:18:51.016 "data_offset": 2048, 00:18:51.016 "data_size": 63488 
00:18:51.016 }, 00:18:51.016 { 00:18:51.016 "name": "BaseBdev2", 00:18:51.016 "uuid": "2e7b10fd-ede4-56b8-9c96-16d4f00b58fa", 00:18:51.016 "is_configured": true, 00:18:51.016 "data_offset": 2048, 00:18:51.016 "data_size": 63488 00:18:51.016 }, 00:18:51.016 { 00:18:51.016 "name": "BaseBdev3", 00:18:51.016 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:18:51.016 "is_configured": true, 00:18:51.016 "data_offset": 2048, 00:18:51.016 "data_size": 63488 00:18:51.016 }, 00:18:51.016 { 00:18:51.016 "name": "BaseBdev4", 00:18:51.016 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:18:51.016 "is_configured": true, 00:18:51.016 "data_offset": 2048, 00:18:51.016 "data_size": 63488 00:18:51.016 } 00:18:51.016 ] 00:18:51.016 }' 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.016 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.016 [2024-11-27 14:18:21.509289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.276 [2024-11-27 14:18:21.578187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:51.276 [2024-11-27 14:18:21.580035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:51.276 [2024-11-27 
14:18:21.691028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:51.276 [2024-11-27 14:18:21.704946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:51.276 [2024-11-27 14:18:21.705038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:51.276 [2024-11-27 14:18:21.705064] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:51.276 [2024-11-27 14:18:21.739612] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.276 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:51.536 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.536 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:51.536 "name": "raid_bdev1",
00:18:51.536 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:51.536 "strip_size_kb": 0,
00:18:51.536 "state": "online",
00:18:51.536 "raid_level": "raid1",
00:18:51.536 "superblock": true,
00:18:51.536 "num_base_bdevs": 4,
00:18:51.536 "num_base_bdevs_discovered": 3,
00:18:51.536 "num_base_bdevs_operational": 3,
00:18:51.536 "base_bdevs_list": [
00:18:51.536 {
00:18:51.536 "name": null,
00:18:51.536 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.536 "is_configured": false,
00:18:51.536 "data_offset": 0,
00:18:51.536 "data_size": 63488
00:18:51.536 },
00:18:51.536 {
00:18:51.536 "name": "BaseBdev2",
00:18:51.536 "uuid": "2e7b10fd-ede4-56b8-9c96-16d4f00b58fa",
00:18:51.536 "is_configured": true,
00:18:51.536 "data_offset": 2048,
00:18:51.536 "data_size": 63488
00:18:51.536 },
00:18:51.536 {
00:18:51.536 "name": "BaseBdev3",
00:18:51.536 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:51.536 "is_configured": true,
00:18:51.536 "data_offset": 2048,
00:18:51.536 "data_size": 63488
00:18:51.536 },
00:18:51.536 {
00:18:51.536 "name": "BaseBdev4",
00:18:51.536 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:51.536 "is_configured": true,
00:18:51.536 "data_offset": 2048,
00:18:51.536 "data_size": 63488
00:18:51.536 }
00:18:51.536 ]
00:18:51.536 }'
00:18:51.536 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:51.536 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:51.795 117.50 IOPS, 352.50 MiB/s [2024-11-27T14:18:22.308Z] 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.795 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:52.054 "name": "raid_bdev1",
00:18:52.054 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:52.054 "strip_size_kb": 0,
00:18:52.054 "state": "online",
00:18:52.054 "raid_level": "raid1",
00:18:52.054 "superblock": true,
00:18:52.054 "num_base_bdevs": 4,
00:18:52.054 "num_base_bdevs_discovered": 3,
00:18:52.054 "num_base_bdevs_operational": 3,
00:18:52.054 "base_bdevs_list": [
00:18:52.054 {
00:18:52.054 "name": null,
00:18:52.054 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:52.054 "is_configured": false,
00:18:52.054 "data_offset": 0,
00:18:52.054 "data_size": 63488
00:18:52.054 },
00:18:52.054 {
00:18:52.054 "name": "BaseBdev2",
00:18:52.054 "uuid": "2e7b10fd-ede4-56b8-9c96-16d4f00b58fa",
00:18:52.054 "is_configured": true,
00:18:52.054 "data_offset": 2048,
00:18:52.054 "data_size": 63488
00:18:52.054 },
00:18:52.054 {
00:18:52.054 "name": "BaseBdev3",
00:18:52.054 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:52.054 "is_configured": true,
00:18:52.054 "data_offset": 2048,
00:18:52.054 "data_size": 63488
00:18:52.054 },
00:18:52.054 {
00:18:52.054 "name": "BaseBdev4",
00:18:52.054 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:52.054 "is_configured": true,
00:18:52.054 "data_offset": 2048,
00:18:52.054 "data_size": 63488
00:18:52.054 }
00:18:52.054 ]
00:18:52.054 }'
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:52.054 [2024-11-27 14:18:22.447889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.054 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:18:52.054 [2024-11-27 14:18:22.543384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:18:52.054 [2024-11-27 14:18:22.546347] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:52.313 [2024-11-27 14:18:22.658422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:52.313 [2024-11-27 14:18:22.658997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:52.572 [2024-11-27 14:18:22.883632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:52.572 [2024-11-27 14:18:22.884560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:52.830 128.33 IOPS, 385.00 MiB/s [2024-11-27T14:18:23.343Z] [2024-11-27 14:18:23.214248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:18:52.830 [2024-11-27 14:18:23.214969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:18:53.088 [2024-11-27 14:18:23.365800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:53.088 "name": "raid_bdev1",
00:18:53.088 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:53.088 "strip_size_kb": 0,
00:18:53.088 "state": "online",
00:18:53.088 "raid_level": "raid1",
00:18:53.088 "superblock": true,
00:18:53.088 "num_base_bdevs": 4,
00:18:53.088 "num_base_bdevs_discovered": 4,
00:18:53.088 "num_base_bdevs_operational": 4,
00:18:53.088 "process": {
00:18:53.088 "type": "rebuild",
00:18:53.088 "target": "spare",
00:18:53.088 "progress": {
00:18:53.088 "blocks": 12288,
00:18:53.088 "percent": 19
00:18:53.088 }
00:18:53.088 },
00:18:53.088 "base_bdevs_list": [
00:18:53.088 {
00:18:53.088 "name": "spare",
00:18:53.088 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:53.088 "is_configured": true,
00:18:53.088 "data_offset": 2048,
00:18:53.088 "data_size": 63488
00:18:53.088 },
00:18:53.088 {
00:18:53.088 "name": "BaseBdev2",
00:18:53.088 "uuid": "2e7b10fd-ede4-56b8-9c96-16d4f00b58fa",
00:18:53.088 "is_configured": true,
00:18:53.088 "data_offset": 2048,
00:18:53.088 "data_size": 63488
00:18:53.088 },
00:18:53.088 {
00:18:53.088 "name": "BaseBdev3",
00:18:53.088 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:53.088 "is_configured": true,
00:18:53.088 "data_offset": 2048,
00:18:53.088 "data_size": 63488
00:18:53.088 },
00:18:53.088 {
00:18:53.088 "name": "BaseBdev4",
00:18:53.088 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:53.088 "is_configured": true,
00:18:53.088 "data_offset": 2048,
00:18:53.088 "data_size": 63488
00:18:53.088 }
00:18:53.088 ]
00:18:53.088 }'
00:18:53.088 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:53.348 [2024-11-27 14:18:23.599899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:18:53.348 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.348 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:53.348 [2024-11-27 14:18:23.690550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:53.606 [2024-11-27 14:18:23.868668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:18:53.606 [2024-11-27 14:18:23.869647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:18:53.606 115.25 IOPS, 345.75 MiB/s [2024-11-27T14:18:24.119Z] [2024-11-27 14:18:24.082483] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:18:53.606 [2024-11-27 14:18:24.082566] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.606 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:53.865 "name": "raid_bdev1",
00:18:53.865 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:53.865 "strip_size_kb": 0,
00:18:53.865 "state": "online",
00:18:53.865 "raid_level": "raid1",
00:18:53.865 "superblock": true,
00:18:53.865 "num_base_bdevs": 4,
00:18:53.865 "num_base_bdevs_discovered": 3,
00:18:53.865 "num_base_bdevs_operational": 3,
00:18:53.865 "process": {
00:18:53.865 "type": "rebuild",
00:18:53.865 "target": "spare",
00:18:53.865 "progress": {
00:18:53.865 "blocks": 16384,
00:18:53.865 "percent": 25
00:18:53.865 }
00:18:53.865 },
00:18:53.865 "base_bdevs_list": [
00:18:53.865 {
00:18:53.865 "name": "spare",
00:18:53.865 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:53.865 "is_configured": true,
00:18:53.865 "data_offset": 2048,
00:18:53.865 "data_size": 63488
00:18:53.865 },
00:18:53.865 {
00:18:53.865 "name": null,
00:18:53.865 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:53.865 "is_configured": false,
00:18:53.865 "data_offset": 0,
00:18:53.865 "data_size": 63488
00:18:53.865 },
00:18:53.865 {
00:18:53.865 "name": "BaseBdev3",
00:18:53.865 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:53.865 "is_configured": true,
00:18:53.865 "data_offset": 2048,
00:18:53.865 "data_size": 63488
00:18:53.865 },
00:18:53.865 {
00:18:53.865 "name": "BaseBdev4",
00:18:53.865 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:53.865 "is_configured": true,
00:18:53.865 "data_offset": 2048,
00:18:53.865 "data_size": 63488
00:18:53.865 }
00:18:53.865 ]
00:18:53.865 }'
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=546
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.865 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:53.865 "name": "raid_bdev1",
00:18:53.866 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:53.866 "strip_size_kb": 0,
00:18:53.866 "state": "online",
00:18:53.866 "raid_level": "raid1",
00:18:53.866 "superblock": true,
00:18:53.866 "num_base_bdevs": 4,
00:18:53.866 "num_base_bdevs_discovered": 3,
00:18:53.866 "num_base_bdevs_operational": 3,
00:18:53.866 "process": {
00:18:53.866 "type": "rebuild",
00:18:53.866 "target": "spare",
00:18:53.866 "progress": {
00:18:53.866 "blocks": 18432,
00:18:53.866 "percent": 29
00:18:53.866 }
00:18:53.866 },
00:18:53.866 "base_bdevs_list": [
00:18:53.866 {
00:18:53.866 "name": "spare",
00:18:53.866 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:53.866 "is_configured": true,
00:18:53.866 "data_offset": 2048,
00:18:53.866 "data_size": 63488
00:18:53.866 },
00:18:53.866 {
00:18:53.866 "name": null,
00:18:53.866 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:53.866 "is_configured": false,
00:18:53.866 "data_offset": 0,
00:18:53.866 "data_size": 63488
00:18:53.866 },
00:18:53.866 {
00:18:53.866 "name": "BaseBdev3",
00:18:53.866 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:53.866 "is_configured": true,
00:18:53.866 "data_offset": 2048,
00:18:53.866 "data_size": 63488
00:18:53.866 },
00:18:53.866 {
00:18:53.866 "name": "BaseBdev4",
00:18:53.866 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:53.866 "is_configured": true,
00:18:53.866 "data_offset": 2048,
00:18:53.866 "data_size": 63488
00:18:53.866 }
00:18:53.866 ]
00:18:53.866 }'
00:18:53.866 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:53.866 [2024-11-27 14:18:24.326506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:18:53.866 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:53.866 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:54.126 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:54.126 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:54.126 [2024-11-27 14:18:24.456954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:18:54.386 [2024-11-27 14:18:24.670042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:18:54.386 [2024-11-27 14:18:24.781468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:18:54.645 111.00 IOPS, 333.00 MiB/s [2024-11-27T14:18:25.158Z] [2024-11-27 14:18:24.996922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:18:54.645 [2024-11-27 14:18:25.116328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:55.213 [2024-11-27 14:18:25.441884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:18:55.213 [2024-11-27 14:18:25.443136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.213 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:55.213 "name": "raid_bdev1",
00:18:55.213 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:55.213 "strip_size_kb": 0,
00:18:55.213 "state": "online",
00:18:55.213 "raid_level": "raid1",
00:18:55.213 "superblock": true,
00:18:55.213 "num_base_bdevs": 4,
00:18:55.213 "num_base_bdevs_discovered": 3,
00:18:55.213 "num_base_bdevs_operational": 3,
00:18:55.213 "process": {
00:18:55.213 "type": "rebuild",
00:18:55.213 "target": "spare",
00:18:55.214 "progress": {
00:18:55.214 "blocks": 36864,
00:18:55.214 "percent": 58
00:18:55.214 }
00:18:55.214 },
00:18:55.214 "base_bdevs_list": [
00:18:55.214 {
00:18:55.214 "name": "spare",
00:18:55.214 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:55.214 "is_configured": true,
00:18:55.214 "data_offset": 2048,
00:18:55.214 "data_size": 63488
00:18:55.214 },
00:18:55.214 {
00:18:55.214 "name": null,
00:18:55.214 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:55.214 "is_configured": false,
00:18:55.214 "data_offset": 0,
00:18:55.214 "data_size": 63488
00:18:55.214 },
00:18:55.214 {
00:18:55.214 "name": "BaseBdev3",
00:18:55.214 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:55.214 "is_configured": true,
00:18:55.214 "data_offset": 2048,
00:18:55.214 "data_size": 63488
00:18:55.214 },
00:18:55.214 {
00:18:55.214 "name": "BaseBdev4",
00:18:55.214 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:55.214 "is_configured": true,
00:18:55.214 "data_offset": 2048,
00:18:55.214 "data_size": 63488
00:18:55.214 }
00:18:55.214 ]
00:18:55.214 }'
00:18:55.214 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:55.214 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:55.214 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:55.214 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:55.214 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:55.214 [2024-11-27 14:18:25.648113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:18:56.429 99.17 IOPS, 297.50 MiB/s [2024-11-27T14:18:26.942Z] 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:56.429 "name": "raid_bdev1",
00:18:56.429 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:56.429 "strip_size_kb": 0,
00:18:56.429 "state": "online",
00:18:56.429 "raid_level": "raid1",
00:18:56.429 "superblock": true,
00:18:56.429 "num_base_bdevs": 4,
00:18:56.429 "num_base_bdevs_discovered": 3,
00:18:56.429 "num_base_bdevs_operational": 3,
00:18:56.429 "process": {
00:18:56.429 "type": "rebuild",
00:18:56.429 "target": "spare",
00:18:56.429 "progress": {
00:18:56.429 "blocks": 57344,
00:18:56.429 "percent": 90
00:18:56.429 }
00:18:56.429 },
00:18:56.429 "base_bdevs_list": [
00:18:56.429 {
00:18:56.429 "name": "spare",
00:18:56.429 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:56.429 "is_configured": true,
00:18:56.429 "data_offset": 2048,
00:18:56.429 "data_size": 63488
00:18:56.429 },
00:18:56.429 {
00:18:56.429 "name": null,
00:18:56.429 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:56.429 "is_configured": false,
00:18:56.429 "data_offset": 0,
00:18:56.429 "data_size": 63488
00:18:56.429 },
00:18:56.429 {
00:18:56.429 "name": "BaseBdev3",
00:18:56.429 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:56.429 "is_configured": true,
00:18:56.429 "data_offset": 2048,
00:18:56.429 "data_size": 63488
00:18:56.429 },
00:18:56.429 {
00:18:56.429 "name": "BaseBdev4",
00:18:56.429 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:56.429 "is_configured": true,
00:18:56.429 "data_offset": 2048,
00:18:56.429 "data_size": 63488
00:18:56.429 }
00:18:56.429 ]
00:18:56.429 }'
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:56.429 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:56.429 [2024-11-27 14:18:26.875149] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:18:56.687 91.00 IOPS, 273.00 MiB/s [2024-11-27T14:18:27.201Z] [2024-11-27 14:18:26.973210] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:18:56.688 [2024-11-27 14:18:26.976898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.253 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:57.510 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.510 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:57.510 "name": "raid_bdev1",
00:18:57.510 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", "strip_size_kb": 0,
00:18:57.510 "state": "online",
00:18:57.510 "raid_level": "raid1",
00:18:57.510 "superblock": true,
00:18:57.510 "num_base_bdevs": 4,
00:18:57.511 "num_base_bdevs_discovered": 3,
00:18:57.511 "num_base_bdevs_operational": 3,
00:18:57.511 "base_bdevs_list": [
00:18:57.511 {
00:18:57.511 "name": "spare",
00:18:57.511 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:57.511 "is_configured": true,
00:18:57.511 "data_offset": 2048,
00:18:57.511 "data_size": 63488
00:18:57.511 },
00:18:57.511 {
00:18:57.511 "name": null,
00:18:57.511 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:57.511 "is_configured": false,
00:18:57.511 "data_offset": 0,
00:18:57.511 "data_size": 63488
00:18:57.511 },
00:18:57.511 {
00:18:57.511 "name": "BaseBdev3",
00:18:57.511 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:57.511 "is_configured": true,
00:18:57.511 "data_offset": 2048,
00:18:57.511 "data_size": 63488
00:18:57.511 },
00:18:57.511 {
00:18:57.511 "name": "BaseBdev4",
00:18:57.511 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:57.511 "is_configured": true,
00:18:57.511 "data_offset": 2048,
00:18:57.511 "data_size": 63488
00:18:57.511 }
00:18:57.511 ]
00:18:57.511 }'
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:57.511 83.75 IOPS, 251.25 MiB/s [2024-11-27T14:18:28.024Z] 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:57.511 "name": "raid_bdev1",
00:18:57.511 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:57.511 "strip_size_kb": 0,
00:18:57.511 "state": "online",
00:18:57.511 "raid_level": "raid1",
00:18:57.511 "superblock": true,
00:18:57.511 "num_base_bdevs": 4,
00:18:57.511 "num_base_bdevs_discovered": 3,
00:18:57.511 "num_base_bdevs_operational": 3,
00:18:57.511 "base_bdevs_list": [
00:18:57.511 {
00:18:57.511 "name": "spare",
00:18:57.511 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:57.511 "is_configured": true,
00:18:57.511 "data_offset": 2048,
00:18:57.511 "data_size": 63488
00:18:57.511 },
00:18:57.511 {
00:18:57.511 "name": null,
00:18:57.511 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:57.511 "is_configured": false,
00:18:57.511 "data_offset": 0,
00:18:57.511 "data_size": 63488
00:18:57.511 },
00:18:57.511 {
00:18:57.511 "name": "BaseBdev3",
00:18:57.511 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:57.511 "is_configured": true,
00:18:57.511 "data_offset": 2048,
00:18:57.511 "data_size": 63488
00:18:57.511 },
00:18:57.511 {
00:18:57.511 "name": "BaseBdev4",
00:18:57.511 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:57.511 "is_configured": true,
00:18:57.511 "data_offset": 2048,
00:18:57.511 "data_size": 63488
00:18:57.511 }
00:18:57.511 ]
00:18:57.511 }'
00:18:57.511 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:57.511 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:57.511 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:57.768 "name": "raid_bdev1",
00:18:57.768 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e",
00:18:57.768 "strip_size_kb": 0,
00:18:57.768 "state": "online",
00:18:57.768 "raid_level": "raid1",
00:18:57.768 "superblock": true,
00:18:57.768 "num_base_bdevs": 4,
00:18:57.768 "num_base_bdevs_discovered": 3,
00:18:57.768 "num_base_bdevs_operational": 3,
00:18:57.768 "base_bdevs_list": [
00:18:57.768 {
00:18:57.768 "name": "spare",
00:18:57.768 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2",
00:18:57.768 "is_configured": true,
00:18:57.768 "data_offset": 2048,
00:18:57.768 "data_size": 63488
00:18:57.768 },
00:18:57.768 {
00:18:57.768 "name": null,
00:18:57.768 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:57.768 "is_configured": false,
00:18:57.768 "data_offset": 0,
00:18:57.768 "data_size": 63488
00:18:57.768 },
00:18:57.768 {
00:18:57.768 "name": "BaseBdev3",
00:18:57.768 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e",
00:18:57.768 "is_configured": true,
00:18:57.768 "data_offset": 2048,
00:18:57.768 "data_size": 63488
00:18:57.768 },
00:18:57.768 {
00:18:57.768 "name": "BaseBdev4",
00:18:57.768 "uuid": "76199f15-b25c-516a-a893-335d426e17f2",
00:18:57.768 "is_configured": true,
00:18:57.768 "data_offset": 2048,
00:18:57.768 "data_size": 63488
00:18:57.768 }
00:18:57.768 ]
00:18:57.768 }'
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:57.768 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:18:58.332 [2024-11-27 14:18:28.610901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:58.332 [2024-11-27 14:18:28.610936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:58.332
00:18:58.332 Latency(us)
00:18:58.332 [2024-11-27T14:18:28.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:58.332 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:18:58.332 raid_bdev1 : 8.77 80.19 240.56 0.00 0.00 17785.70 279.27 126782.37
00:18:58.332 [2024-11-27T14:18:28.845Z] ===================================================================================================================
00:18:58.332 [2024-11-27T14:18:28.845Z] Total : 80.19 240.56 0.00 0.00 17785.70 279.27 126782.37
00:18:58.332 [2024-11-27 14:18:28.664172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:58.332 {
00:18:58.332 "results": [
00:18:58.332 {
00:18:58.332 "job": "raid_bdev1",
00:18:58.332 "core_mask": "0x1",
00:18:58.332 "workload": "randrw",
00:18:58.332 "percentage": 50,
00:18:58.332 "status": "finished",
00:18:58.332 "queue_depth": 2,
00:18:58.332 "io_size": 3145728,
00:18:58.332 "runtime": 8.767124,
00:18:58.332 "iops": 80.1859309848931,
00:18:58.332 "mibps": 240.55779295467931,
00:18:58.332 "io_failed": 0,
00:18:58.332 "io_timeout": 0,
00:18:58.332 "avg_latency_us": 17785.697493857493,
00:18:58.332 "min_latency_us": 279.27272727272725, 00:18:58.332 "max_latency_us": 126782.37090909091 00:18:58.332 } 00:18:58.332 ], 00:18:58.332 "core_count": 1 00:18:58.332 } 00:18:58.332 [2024-11-27 14:18:28.664403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.332 [2024-11-27 14:18:28.664547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.332 [2024-11-27 14:18:28.664564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
local bdev_list 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.332 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:58.592 /dev/nbd0 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:18:58.592 1+0 records in 00:18:58.592 1+0 records out 00:18:58.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376874 s, 10.9 MB/s 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:58.592 14:18:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.592 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:59.162 /dev/nbd1 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:59.162 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.163 1+0 records in 00:18:59.163 1+0 
records out 00:18:59.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413495 s, 9.9 MB/s 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.163 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:59.729 14:18:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.729 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:59.730 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.730 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.730 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:59.988 /dev/nbd1 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.988 1+0 records in 00:18:59.988 1+0 records out 00:18:59.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602354 s, 6.8 MB/s 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.988 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:00.555 14:18:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.555 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.813 [2024-11-27 14:18:31.148520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.813 [2024-11-27 14:18:31.148613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.813 [2024-11-27 14:18:31.148667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:00.813 [2024-11-27 14:18:31.148682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.813 [2024-11-27 14:18:31.151903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.813 [2024-11-27 14:18:31.151948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.813 [2024-11-27 14:18:31.152074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:00.813 [2024-11-27 14:18:31.152139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.813 [2024-11-27 14:18:31.152325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.813 [2024-11-27 14:18:31.152480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:00.813 spare 00:19:00.813 14:18:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.813 [2024-11-27 14:18:31.252698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:00.813 [2024-11-27 14:18:31.252841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:00.813 [2024-11-27 14:18:31.253533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:19:00.813 [2024-11-27 14:18:31.254020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:00.813 [2024-11-27 14:18:31.254054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:00.813 [2024-11-27 14:18:31.254453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.813 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.813 "name": "raid_bdev1", 00:19:00.814 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:00.814 "strip_size_kb": 0, 00:19:00.814 "state": "online", 00:19:00.814 "raid_level": "raid1", 00:19:00.814 "superblock": true, 00:19:00.814 "num_base_bdevs": 4, 00:19:00.814 "num_base_bdevs_discovered": 3, 00:19:00.814 "num_base_bdevs_operational": 3, 00:19:00.814 "base_bdevs_list": [ 00:19:00.814 { 00:19:00.814 "name": "spare", 00:19:00.814 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2", 00:19:00.814 "is_configured": true, 00:19:00.814 "data_offset": 2048, 00:19:00.814 "data_size": 63488 00:19:00.814 }, 00:19:00.814 { 00:19:00.814 "name": null, 00:19:00.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.814 "is_configured": false, 00:19:00.814 "data_offset": 2048, 00:19:00.814 "data_size": 63488 00:19:00.814 }, 00:19:00.814 { 00:19:00.814 "name": "BaseBdev3", 00:19:00.814 
"uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:00.814 "is_configured": true, 00:19:00.814 "data_offset": 2048, 00:19:00.814 "data_size": 63488 00:19:00.814 }, 00:19:00.814 { 00:19:00.814 "name": "BaseBdev4", 00:19:00.814 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:00.814 "is_configured": true, 00:19:00.814 "data_offset": 2048, 00:19:00.814 "data_size": 63488 00:19:00.814 } 00:19:00.814 ] 00:19:00.814 }' 00:19:00.814 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.814 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.380 "name": "raid_bdev1", 00:19:01.380 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:01.380 "strip_size_kb": 0, 
00:19:01.380 "state": "online", 00:19:01.380 "raid_level": "raid1", 00:19:01.380 "superblock": true, 00:19:01.380 "num_base_bdevs": 4, 00:19:01.380 "num_base_bdevs_discovered": 3, 00:19:01.380 "num_base_bdevs_operational": 3, 00:19:01.380 "base_bdevs_list": [ 00:19:01.380 { 00:19:01.380 "name": "spare", 00:19:01.380 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2", 00:19:01.380 "is_configured": true, 00:19:01.380 "data_offset": 2048, 00:19:01.380 "data_size": 63488 00:19:01.380 }, 00:19:01.380 { 00:19:01.380 "name": null, 00:19:01.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.380 "is_configured": false, 00:19:01.380 "data_offset": 2048, 00:19:01.380 "data_size": 63488 00:19:01.380 }, 00:19:01.380 { 00:19:01.380 "name": "BaseBdev3", 00:19:01.380 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:01.380 "is_configured": true, 00:19:01.380 "data_offset": 2048, 00:19:01.380 "data_size": 63488 00:19:01.380 }, 00:19:01.380 { 00:19:01.380 "name": "BaseBdev4", 00:19:01.380 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:01.380 "is_configured": true, 00:19:01.380 "data_offset": 2048, 00:19:01.380 "data_size": 63488 00:19:01.380 } 00:19:01.380 ] 00:19:01.380 }' 00:19:01.380 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.639 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 [2024-11-27 14:18:32.005201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.639 "name": "raid_bdev1", 00:19:01.639 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:01.639 "strip_size_kb": 0, 00:19:01.639 "state": "online", 00:19:01.639 "raid_level": "raid1", 00:19:01.639 "superblock": true, 00:19:01.639 "num_base_bdevs": 4, 00:19:01.639 "num_base_bdevs_discovered": 2, 00:19:01.639 "num_base_bdevs_operational": 2, 00:19:01.639 "base_bdevs_list": [ 00:19:01.639 { 00:19:01.639 "name": null, 00:19:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.639 "is_configured": false, 00:19:01.639 "data_offset": 0, 00:19:01.639 "data_size": 63488 00:19:01.639 }, 00:19:01.639 { 00:19:01.639 "name": null, 00:19:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.639 "is_configured": false, 00:19:01.639 "data_offset": 2048, 00:19:01.639 "data_size": 63488 00:19:01.639 }, 00:19:01.639 { 00:19:01.639 "name": "BaseBdev3", 00:19:01.639 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:01.639 "is_configured": true, 00:19:01.639 "data_offset": 2048, 00:19:01.639 "data_size": 63488 00:19:01.639 }, 00:19:01.639 { 00:19:01.639 "name": "BaseBdev4", 00:19:01.639 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:01.639 "is_configured": true, 00:19:01.639 "data_offset": 2048, 00:19:01.639 "data_size": 63488 00:19:01.639 } 00:19:01.639 ] 00:19:01.639 }' 00:19:01.639 
14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.639 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.206 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.206 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.206 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.206 [2024-11-27 14:18:32.537517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.206 [2024-11-27 14:18:32.538012] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:02.206 [2024-11-27 14:18:32.538173] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:02.206 [2024-11-27 14:18:32.538234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.206 [2024-11-27 14:18:32.552825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:19:02.206 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.206 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:02.206 [2024-11-27 14:18:32.555459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.142 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.142 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.142 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.142 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.142 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.143 "name": "raid_bdev1", 00:19:03.143 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:03.143 "strip_size_kb": 0, 00:19:03.143 "state": "online", 00:19:03.143 "raid_level": "raid1", 00:19:03.143 "superblock": true, 00:19:03.143 "num_base_bdevs": 4, 00:19:03.143 "num_base_bdevs_discovered": 3, 00:19:03.143 "num_base_bdevs_operational": 3, 00:19:03.143 "process": { 00:19:03.143 "type": "rebuild", 00:19:03.143 "target": "spare", 00:19:03.143 "progress": { 00:19:03.143 "blocks": 20480, 00:19:03.143 "percent": 32 00:19:03.143 } 00:19:03.143 }, 00:19:03.143 "base_bdevs_list": [ 00:19:03.143 { 00:19:03.143 "name": "spare", 00:19:03.143 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2", 00:19:03.143 "is_configured": true, 00:19:03.143 "data_offset": 2048, 00:19:03.143 "data_size": 63488 00:19:03.143 }, 00:19:03.143 { 00:19:03.143 "name": null, 00:19:03.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.143 "is_configured": false, 00:19:03.143 "data_offset": 2048, 00:19:03.143 "data_size": 63488 00:19:03.143 }, 00:19:03.143 { 00:19:03.143 "name": "BaseBdev3", 00:19:03.143 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:03.143 
"is_configured": true, 00:19:03.143 "data_offset": 2048, 00:19:03.143 "data_size": 63488 00:19:03.143 }, 00:19:03.143 { 00:19:03.143 "name": "BaseBdev4", 00:19:03.143 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:03.143 "is_configured": true, 00:19:03.143 "data_offset": 2048, 00:19:03.143 "data_size": 63488 00:19:03.143 } 00:19:03.143 ] 00:19:03.143 }' 00:19:03.143 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.401 [2024-11-27 14:18:33.737117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.401 [2024-11-27 14:18:33.765022] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:03.401 [2024-11-27 14:18:33.765104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.401 [2024-11-27 14:18:33.765135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.401 [2024-11-27 14:18:33.765146] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.401 "name": "raid_bdev1", 00:19:03.401 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:03.401 "strip_size_kb": 0, 00:19:03.401 "state": "online", 00:19:03.401 "raid_level": "raid1", 00:19:03.401 "superblock": true, 00:19:03.401 "num_base_bdevs": 4, 00:19:03.401 
"num_base_bdevs_discovered": 2, 00:19:03.401 "num_base_bdevs_operational": 2, 00:19:03.401 "base_bdevs_list": [ 00:19:03.401 { 00:19:03.401 "name": null, 00:19:03.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.401 "is_configured": false, 00:19:03.401 "data_offset": 0, 00:19:03.401 "data_size": 63488 00:19:03.401 }, 00:19:03.401 { 00:19:03.401 "name": null, 00:19:03.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.401 "is_configured": false, 00:19:03.401 "data_offset": 2048, 00:19:03.401 "data_size": 63488 00:19:03.401 }, 00:19:03.401 { 00:19:03.401 "name": "BaseBdev3", 00:19:03.401 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:03.401 "is_configured": true, 00:19:03.401 "data_offset": 2048, 00:19:03.401 "data_size": 63488 00:19:03.401 }, 00:19:03.401 { 00:19:03.401 "name": "BaseBdev4", 00:19:03.401 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:03.401 "is_configured": true, 00:19:03.401 "data_offset": 2048, 00:19:03.401 "data_size": 63488 00:19:03.401 } 00:19:03.401 ] 00:19:03.401 }' 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.401 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.967 14:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.967 14:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.967 14:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.967 [2024-11-27 14:18:34.374072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.967 [2024-11-27 14:18:34.374156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.967 [2024-11-27 14:18:34.374201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:03.967 [2024-11-27 
14:18:34.374218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.967 [2024-11-27 14:18:34.374891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.967 [2024-11-27 14:18:34.374923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.967 [2024-11-27 14:18:34.375054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:03.967 [2024-11-27 14:18:34.375073] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:03.967 [2024-11-27 14:18:34.375100] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:03.967 [2024-11-27 14:18:34.375150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.967 [2024-11-27 14:18:34.389852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:19:03.967 spare 00:19:03.967 14:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.967 14:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:03.967 [2024-11-27 14:18:34.392558] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.903 14:18:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.903 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.162 "name": "raid_bdev1", 00:19:05.162 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:05.162 "strip_size_kb": 0, 00:19:05.162 "state": "online", 00:19:05.162 "raid_level": "raid1", 00:19:05.162 "superblock": true, 00:19:05.162 "num_base_bdevs": 4, 00:19:05.162 "num_base_bdevs_discovered": 3, 00:19:05.162 "num_base_bdevs_operational": 3, 00:19:05.162 "process": { 00:19:05.162 "type": "rebuild", 00:19:05.162 "target": "spare", 00:19:05.162 "progress": { 00:19:05.162 "blocks": 20480, 00:19:05.162 "percent": 32 00:19:05.162 } 00:19:05.162 }, 00:19:05.162 "base_bdevs_list": [ 00:19:05.162 { 00:19:05.162 "name": "spare", 00:19:05.162 "uuid": "af395873-0884-5d1d-8bbc-6919e4f2a3c2", 00:19:05.162 "is_configured": true, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 }, 00:19:05.162 { 00:19:05.162 "name": null, 00:19:05.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.162 "is_configured": false, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 }, 00:19:05.162 { 00:19:05.162 "name": "BaseBdev3", 00:19:05.162 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:05.162 "is_configured": true, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 }, 00:19:05.162 { 00:19:05.162 "name": "BaseBdev4", 00:19:05.162 "uuid": 
"76199f15-b25c-516a-a893-335d426e17f2", 00:19:05.162 "is_configured": true, 00:19:05.162 "data_offset": 2048, 00:19:05.162 "data_size": 63488 00:19:05.162 } 00:19:05.162 ] 00:19:05.162 }' 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.162 [2024-11-27 14:18:35.582611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.162 [2024-11-27 14:18:35.602372] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.162 [2024-11-27 14:18:35.602600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.162 [2024-11-27 14:18:35.602734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.162 [2024-11-27 14:18:35.602790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.162 14:18:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.162 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.163 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.421 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.421 "name": "raid_bdev1", 00:19:05.421 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:05.421 "strip_size_kb": 0, 00:19:05.421 "state": "online", 00:19:05.421 "raid_level": "raid1", 00:19:05.421 "superblock": true, 00:19:05.421 "num_base_bdevs": 4, 00:19:05.421 "num_base_bdevs_discovered": 2, 00:19:05.421 "num_base_bdevs_operational": 2, 00:19:05.421 "base_bdevs_list": [ 00:19:05.421 { 00:19:05.421 "name": null, 00:19:05.421 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:05.421 "is_configured": false, 00:19:05.421 "data_offset": 0, 00:19:05.421 "data_size": 63488 00:19:05.421 }, 00:19:05.421 { 00:19:05.421 "name": null, 00:19:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.421 "is_configured": false, 00:19:05.421 "data_offset": 2048, 00:19:05.421 "data_size": 63488 00:19:05.421 }, 00:19:05.421 { 00:19:05.421 "name": "BaseBdev3", 00:19:05.421 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:05.421 "is_configured": true, 00:19:05.421 "data_offset": 2048, 00:19:05.421 "data_size": 63488 00:19:05.421 }, 00:19:05.421 { 00:19:05.421 "name": "BaseBdev4", 00:19:05.421 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:05.421 "is_configured": true, 00:19:05.421 "data_offset": 2048, 00:19:05.421 "data_size": 63488 00:19:05.421 } 00:19:05.421 ] 00:19:05.421 }' 00:19:05.421 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.421 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.991 "name": "raid_bdev1", 00:19:05.991 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:05.991 "strip_size_kb": 0, 00:19:05.991 "state": "online", 00:19:05.991 "raid_level": "raid1", 00:19:05.991 "superblock": true, 00:19:05.991 "num_base_bdevs": 4, 00:19:05.991 "num_base_bdevs_discovered": 2, 00:19:05.991 "num_base_bdevs_operational": 2, 00:19:05.991 "base_bdevs_list": [ 00:19:05.991 { 00:19:05.991 "name": null, 00:19:05.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.991 "is_configured": false, 00:19:05.991 "data_offset": 0, 00:19:05.991 "data_size": 63488 00:19:05.991 }, 00:19:05.991 { 00:19:05.991 "name": null, 00:19:05.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.991 "is_configured": false, 00:19:05.991 "data_offset": 2048, 00:19:05.991 "data_size": 63488 00:19:05.991 }, 00:19:05.991 { 00:19:05.991 "name": "BaseBdev3", 00:19:05.991 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:05.991 "is_configured": true, 00:19:05.991 "data_offset": 2048, 00:19:05.991 "data_size": 63488 00:19:05.991 }, 00:19:05.991 { 00:19:05.991 "name": "BaseBdev4", 00:19:05.991 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:05.991 "is_configured": true, 00:19:05.991 "data_offset": 2048, 00:19:05.991 "data_size": 63488 00:19:05.991 } 00:19:05.991 ] 00:19:05.991 }' 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.991 14:18:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.991 [2024-11-27 14:18:36.363815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.991 [2024-11-27 14:18:36.363895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.991 [2024-11-27 14:18:36.363925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:19:05.991 [2024-11-27 14:18:36.363944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.991 [2024-11-27 14:18:36.364641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.991 [2024-11-27 14:18:36.364677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.991 [2024-11-27 14:18:36.364806] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:05.991 [2024-11-27 14:18:36.364832] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:05.991 [2024-11-27 14:18:36.364857] 
bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:05.991 [2024-11-27 14:18:36.364880] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:05.991 BaseBdev1 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.991 14:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.928 14:18:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.928 "name": "raid_bdev1", 00:19:06.928 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:06.928 "strip_size_kb": 0, 00:19:06.928 "state": "online", 00:19:06.928 "raid_level": "raid1", 00:19:06.928 "superblock": true, 00:19:06.928 "num_base_bdevs": 4, 00:19:06.928 "num_base_bdevs_discovered": 2, 00:19:06.928 "num_base_bdevs_operational": 2, 00:19:06.928 "base_bdevs_list": [ 00:19:06.928 { 00:19:06.928 "name": null, 00:19:06.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.928 "is_configured": false, 00:19:06.928 "data_offset": 0, 00:19:06.928 "data_size": 63488 00:19:06.928 }, 00:19:06.928 { 00:19:06.928 "name": null, 00:19:06.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.928 "is_configured": false, 00:19:06.928 "data_offset": 2048, 00:19:06.928 "data_size": 63488 00:19:06.928 }, 00:19:06.928 { 00:19:06.928 "name": "BaseBdev3", 00:19:06.928 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:06.928 "is_configured": true, 00:19:06.928 "data_offset": 2048, 00:19:06.928 "data_size": 63488 00:19:06.928 }, 00:19:06.928 { 00:19:06.928 "name": "BaseBdev4", 00:19:06.928 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:06.928 "is_configured": true, 00:19:06.928 "data_offset": 2048, 00:19:06.928 "data_size": 63488 00:19:06.928 } 00:19:06.928 ] 00:19:06.928 }' 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.928 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.495 14:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.754 "name": "raid_bdev1", 00:19:07.754 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:07.754 "strip_size_kb": 0, 00:19:07.754 "state": "online", 00:19:07.754 "raid_level": "raid1", 00:19:07.754 "superblock": true, 00:19:07.754 "num_base_bdevs": 4, 00:19:07.754 "num_base_bdevs_discovered": 2, 00:19:07.754 "num_base_bdevs_operational": 2, 00:19:07.754 "base_bdevs_list": [ 00:19:07.754 { 00:19:07.754 "name": null, 00:19:07.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.754 "is_configured": false, 00:19:07.754 "data_offset": 0, 00:19:07.754 "data_size": 63488 00:19:07.754 }, 00:19:07.754 { 00:19:07.754 "name": null, 00:19:07.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.754 "is_configured": false, 00:19:07.754 "data_offset": 2048, 00:19:07.754 "data_size": 63488 00:19:07.754 }, 00:19:07.754 { 00:19:07.754 "name": "BaseBdev3", 00:19:07.754 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 
00:19:07.754 "is_configured": true, 00:19:07.754 "data_offset": 2048, 00:19:07.754 "data_size": 63488 00:19:07.754 }, 00:19:07.754 { 00:19:07.754 "name": "BaseBdev4", 00:19:07.754 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:07.754 "is_configured": true, 00:19:07.754 "data_offset": 2048, 00:19:07.754 "data_size": 63488 00:19:07.754 } 00:19:07.754 ] 00:19:07.754 }' 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.754 14:18:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.754 [2024-11-27 14:18:38.136874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.754 [2024-11-27 14:18:38.137170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:07.754 [2024-11-27 14:18:38.137205] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:07.754 request: 00:19:07.754 { 00:19:07.754 "base_bdev": "BaseBdev1", 00:19:07.754 "raid_bdev": "raid_bdev1", 00:19:07.754 "method": "bdev_raid_add_base_bdev", 00:19:07.754 "req_id": 1 00:19:07.754 } 00:19:07.754 Got JSON-RPC error response 00:19:07.754 response: 00:19:07.754 { 00:19:07.754 "code": -22, 00:19:07.754 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:07.754 } 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:07.754 14:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.689 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.690 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.947 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.947 "name": "raid_bdev1", 00:19:08.947 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:08.947 "strip_size_kb": 0, 00:19:08.947 "state": "online", 00:19:08.947 "raid_level": "raid1", 00:19:08.947 "superblock": true, 00:19:08.947 "num_base_bdevs": 4, 00:19:08.947 "num_base_bdevs_discovered": 2, 00:19:08.947 "num_base_bdevs_operational": 2, 00:19:08.947 "base_bdevs_list": [ 00:19:08.947 { 00:19:08.947 "name": null, 00:19:08.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.947 "is_configured": false, 00:19:08.948 "data_offset": 0, 00:19:08.948 "data_size": 63488 00:19:08.948 }, 00:19:08.948 { 
00:19:08.948 "name": null, 00:19:08.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.948 "is_configured": false, 00:19:08.948 "data_offset": 2048, 00:19:08.948 "data_size": 63488 00:19:08.948 }, 00:19:08.948 { 00:19:08.948 "name": "BaseBdev3", 00:19:08.948 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:08.948 "is_configured": true, 00:19:08.948 "data_offset": 2048, 00:19:08.948 "data_size": 63488 00:19:08.948 }, 00:19:08.948 { 00:19:08.948 "name": "BaseBdev4", 00:19:08.948 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:08.948 "is_configured": true, 00:19:08.948 "data_offset": 2048, 00:19:08.948 "data_size": 63488 00:19:08.948 } 00:19:08.948 ] 00:19:08.948 }' 00:19:08.948 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.948 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.206 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.464 "name": "raid_bdev1", 00:19:09.464 "uuid": "29669840-4fa8-4429-8bff-49732ab5c78e", 00:19:09.464 "strip_size_kb": 0, 00:19:09.464 "state": "online", 00:19:09.464 "raid_level": "raid1", 00:19:09.464 "superblock": true, 00:19:09.464 "num_base_bdevs": 4, 00:19:09.464 "num_base_bdevs_discovered": 2, 00:19:09.464 "num_base_bdevs_operational": 2, 00:19:09.464 "base_bdevs_list": [ 00:19:09.464 { 00:19:09.464 "name": null, 00:19:09.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.464 "is_configured": false, 00:19:09.464 "data_offset": 0, 00:19:09.464 "data_size": 63488 00:19:09.464 }, 00:19:09.464 { 00:19:09.464 "name": null, 00:19:09.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.464 "is_configured": false, 00:19:09.464 "data_offset": 2048, 00:19:09.464 "data_size": 63488 00:19:09.464 }, 00:19:09.464 { 00:19:09.464 "name": "BaseBdev3", 00:19:09.464 "uuid": "d2123673-ceae-5fa1-a57d-e0191447ba2e", 00:19:09.464 "is_configured": true, 00:19:09.464 "data_offset": 2048, 00:19:09.464 "data_size": 63488 00:19:09.464 }, 00:19:09.464 { 00:19:09.464 "name": "BaseBdev4", 00:19:09.464 "uuid": "76199f15-b25c-516a-a893-335d426e17f2", 00:19:09.464 "is_configured": true, 00:19:09.464 "data_offset": 2048, 00:19:09.464 "data_size": 63488 00:19:09.464 } 00:19:09.464 ] 00:19:09.464 }' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 79710 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79710 ']' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79710 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79710 00:19:09.464 killing process with pid 79710 00:19:09.464 Received shutdown signal, test time was about 20.030949 seconds 00:19:09.464 00:19:09.464 Latency(us) 00:19:09.464 [2024-11-27T14:18:39.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.464 [2024-11-27T14:18:39.977Z] =================================================================================================================== 00:19:09.464 [2024-11-27T14:18:39.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79710' 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79710 00:19:09.464 14:18:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79710 00:19:09.464 [2024-11-27 14:18:39.906651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.464 [2024-11-27 14:18:39.906873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.464 [2024-11-27 14:18:39.907008] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.464 [2024-11-27 14:18:39.907037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:10.029 [2024-11-27 14:18:40.338687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.462 14:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:11.462 00:19:11.462 real 0m23.802s 00:19:11.462 user 0m32.559s 00:19:11.462 sys 0m2.514s 00:19:11.462 14:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.462 14:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.462 ************************************ 00:19:11.462 END TEST raid_rebuild_test_sb_io 00:19:11.462 ************************************ 00:19:11.462 14:18:41 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:11.462 14:18:41 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:19:11.462 14:18:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:11.462 14:18:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.462 14:18:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.462 ************************************ 00:19:11.462 START TEST raid5f_state_function_test 00:19:11.462 ************************************ 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 
00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.462 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:11.463 
14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80449 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:11.463 Process raid pid: 80449 00:19:11.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80449' 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80449 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80449 ']' 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.463 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.463 [2024-11-27 14:18:41.737661] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:19:11.463 [2024-11-27 14:18:41.737847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.463 [2024-11-27 14:18:41.916603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.721 [2024-11-27 14:18:42.052269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.979 [2024-11-27 14:18:42.271468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.979 [2024-11-27 14:18:42.271511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.549 [2024-11-27 14:18:42.796974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.549 [2024-11-27 14:18:42.797038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.549 [2024-11-27 14:18:42.797055] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.549 [2024-11-27 14:18:42.797071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.549 [2024-11-27 14:18:42.797080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.549 [2024-11-27 14:18:42.797094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.549 "name": "Existed_Raid", 00:19:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.549 "strip_size_kb": 64, 00:19:12.549 "state": "configuring", 00:19:12.549 "raid_level": "raid5f", 00:19:12.549 "superblock": false, 00:19:12.549 "num_base_bdevs": 3, 00:19:12.549 "num_base_bdevs_discovered": 0, 00:19:12.549 "num_base_bdevs_operational": 3, 00:19:12.549 "base_bdevs_list": [ 00:19:12.549 { 00:19:12.549 "name": "BaseBdev1", 00:19:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.549 "is_configured": false, 00:19:12.549 "data_offset": 0, 00:19:12.549 "data_size": 0 00:19:12.549 }, 00:19:12.549 { 00:19:12.549 "name": "BaseBdev2", 00:19:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.549 "is_configured": false, 00:19:12.549 "data_offset": 0, 00:19:12.549 "data_size": 0 00:19:12.549 }, 00:19:12.549 { 00:19:12.549 "name": "BaseBdev3", 00:19:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.549 "is_configured": false, 00:19:12.549 "data_offset": 0, 00:19:12.549 "data_size": 0 00:19:12.549 } 00:19:12.549 ] 00:19:12.549 }' 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.549 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 [2024-11-27 14:18:43.329150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.117 [2024-11-27 14:18:43.329341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 [2024-11-27 14:18:43.337124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.117 [2024-11-27 14:18:43.337344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.117 [2024-11-27 14:18:43.337475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.117 [2024-11-27 14:18:43.337536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.117 [2024-11-27 14:18:43.337702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:13.117 [2024-11-27 14:18:43.337761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 [2024-11-27 14:18:43.388474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.117 BaseBdev1 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.118 [ 00:19:13.118 { 00:19:13.118 "name": "BaseBdev1", 00:19:13.118 "aliases": [ 
00:19:13.118 "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2" 00:19:13.118 ], 00:19:13.118 "product_name": "Malloc disk", 00:19:13.118 "block_size": 512, 00:19:13.118 "num_blocks": 65536, 00:19:13.118 "uuid": "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2", 00:19:13.118 "assigned_rate_limits": { 00:19:13.118 "rw_ios_per_sec": 0, 00:19:13.118 "rw_mbytes_per_sec": 0, 00:19:13.118 "r_mbytes_per_sec": 0, 00:19:13.118 "w_mbytes_per_sec": 0 00:19:13.118 }, 00:19:13.118 "claimed": true, 00:19:13.118 "claim_type": "exclusive_write", 00:19:13.118 "zoned": false, 00:19:13.118 "supported_io_types": { 00:19:13.118 "read": true, 00:19:13.118 "write": true, 00:19:13.118 "unmap": true, 00:19:13.118 "flush": true, 00:19:13.118 "reset": true, 00:19:13.118 "nvme_admin": false, 00:19:13.118 "nvme_io": false, 00:19:13.118 "nvme_io_md": false, 00:19:13.118 "write_zeroes": true, 00:19:13.118 "zcopy": true, 00:19:13.118 "get_zone_info": false, 00:19:13.118 "zone_management": false, 00:19:13.118 "zone_append": false, 00:19:13.118 "compare": false, 00:19:13.118 "compare_and_write": false, 00:19:13.118 "abort": true, 00:19:13.118 "seek_hole": false, 00:19:13.118 "seek_data": false, 00:19:13.118 "copy": true, 00:19:13.118 "nvme_iov_md": false 00:19:13.118 }, 00:19:13.118 "memory_domains": [ 00:19:13.118 { 00:19:13.118 "dma_device_id": "system", 00:19:13.118 "dma_device_type": 1 00:19:13.118 }, 00:19:13.118 { 00:19:13.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.118 "dma_device_type": 2 00:19:13.118 } 00:19:13.118 ], 00:19:13.118 "driver_specific": {} 00:19:13.118 } 00:19:13.118 ] 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.118 "name": "Existed_Raid", 00:19:13.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.118 "strip_size_kb": 64, 00:19:13.118 "state": "configuring", 00:19:13.118 "raid_level": "raid5f", 00:19:13.118 "superblock": false, 00:19:13.118 "num_base_bdevs": 3, 00:19:13.118 "num_base_bdevs_discovered": 1, 00:19:13.118 
"num_base_bdevs_operational": 3, 00:19:13.118 "base_bdevs_list": [ 00:19:13.118 { 00:19:13.118 "name": "BaseBdev1", 00:19:13.118 "uuid": "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2", 00:19:13.118 "is_configured": true, 00:19:13.118 "data_offset": 0, 00:19:13.118 "data_size": 65536 00:19:13.118 }, 00:19:13.118 { 00:19:13.118 "name": "BaseBdev2", 00:19:13.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.118 "is_configured": false, 00:19:13.118 "data_offset": 0, 00:19:13.118 "data_size": 0 00:19:13.118 }, 00:19:13.118 { 00:19:13.118 "name": "BaseBdev3", 00:19:13.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.118 "is_configured": false, 00:19:13.118 "data_offset": 0, 00:19:13.118 "data_size": 0 00:19:13.118 } 00:19:13.118 ] 00:19:13.118 }' 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.118 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.685 [2024-11-27 14:18:43.944698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.685 [2024-11-27 14:18:43.944777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.685 [2024-11-27 14:18:43.952734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.685 [2024-11-27 14:18:43.955544] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.685 [2024-11-27 14:18:43.955771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.685 [2024-11-27 14:18:43.955919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:13.685 [2024-11-27 14:18:43.955981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.685 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.685 14:18:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.686 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.686 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.686 "name": "Existed_Raid", 00:19:13.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.686 "strip_size_kb": 64, 00:19:13.686 "state": "configuring", 00:19:13.686 "raid_level": "raid5f", 00:19:13.686 "superblock": false, 00:19:13.686 "num_base_bdevs": 3, 00:19:13.686 "num_base_bdevs_discovered": 1, 00:19:13.686 "num_base_bdevs_operational": 3, 00:19:13.686 "base_bdevs_list": [ 00:19:13.686 { 00:19:13.686 "name": "BaseBdev1", 00:19:13.686 "uuid": "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2", 00:19:13.686 "is_configured": true, 00:19:13.686 "data_offset": 0, 00:19:13.686 "data_size": 65536 00:19:13.686 }, 00:19:13.686 { 00:19:13.686 "name": "BaseBdev2", 00:19:13.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.686 "is_configured": false, 00:19:13.686 "data_offset": 0, 00:19:13.686 "data_size": 0 00:19:13.686 }, 00:19:13.686 { 00:19:13.686 "name": "BaseBdev3", 00:19:13.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.686 "is_configured": false, 
00:19:13.686 "data_offset": 0, 00:19:13.686 "data_size": 0 00:19:13.686 } 00:19:13.686 ] 00:19:13.686 }' 00:19:13.686 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.686 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.253 [2024-11-27 14:18:44.541740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.253 BaseBdev2 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.253 14:18:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.253 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.253 [ 00:19:14.253 { 00:19:14.253 "name": "BaseBdev2", 00:19:14.253 "aliases": [ 00:19:14.253 "40cf2cd9-a4bb-46e7-b359-331789cc356b" 00:19:14.253 ], 00:19:14.253 "product_name": "Malloc disk", 00:19:14.253 "block_size": 512, 00:19:14.253 "num_blocks": 65536, 00:19:14.253 "uuid": "40cf2cd9-a4bb-46e7-b359-331789cc356b", 00:19:14.253 "assigned_rate_limits": { 00:19:14.253 "rw_ios_per_sec": 0, 00:19:14.253 "rw_mbytes_per_sec": 0, 00:19:14.253 "r_mbytes_per_sec": 0, 00:19:14.253 "w_mbytes_per_sec": 0 00:19:14.253 }, 00:19:14.253 "claimed": true, 00:19:14.253 "claim_type": "exclusive_write", 00:19:14.253 "zoned": false, 00:19:14.253 "supported_io_types": { 00:19:14.253 "read": true, 00:19:14.253 "write": true, 00:19:14.253 "unmap": true, 00:19:14.253 "flush": true, 00:19:14.253 "reset": true, 00:19:14.253 "nvme_admin": false, 00:19:14.253 "nvme_io": false, 00:19:14.253 "nvme_io_md": false, 00:19:14.253 "write_zeroes": true, 00:19:14.253 "zcopy": true, 00:19:14.253 "get_zone_info": false, 00:19:14.253 "zone_management": false, 00:19:14.253 "zone_append": false, 00:19:14.253 "compare": false, 00:19:14.253 "compare_and_write": false, 00:19:14.253 "abort": true, 00:19:14.253 "seek_hole": false, 00:19:14.253 "seek_data": false, 00:19:14.253 "copy": true, 00:19:14.253 "nvme_iov_md": false 00:19:14.253 }, 00:19:14.253 "memory_domains": [ 00:19:14.253 { 00:19:14.253 "dma_device_id": "system", 00:19:14.253 "dma_device_type": 1 00:19:14.253 }, 00:19:14.253 { 00:19:14.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.253 
"dma_device_type": 2 00:19:14.253 } 00:19:14.253 ], 00:19:14.253 "driver_specific": {} 00:19:14.253 } 00:19:14.253 ] 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.254 14:18:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.254 "name": "Existed_Raid", 00:19:14.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.254 "strip_size_kb": 64, 00:19:14.254 "state": "configuring", 00:19:14.254 "raid_level": "raid5f", 00:19:14.254 "superblock": false, 00:19:14.254 "num_base_bdevs": 3, 00:19:14.254 "num_base_bdevs_discovered": 2, 00:19:14.254 "num_base_bdevs_operational": 3, 00:19:14.254 "base_bdevs_list": [ 00:19:14.254 { 00:19:14.254 "name": "BaseBdev1", 00:19:14.254 "uuid": "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2", 00:19:14.254 "is_configured": true, 00:19:14.254 "data_offset": 0, 00:19:14.254 "data_size": 65536 00:19:14.254 }, 00:19:14.254 { 00:19:14.254 "name": "BaseBdev2", 00:19:14.254 "uuid": "40cf2cd9-a4bb-46e7-b359-331789cc356b", 00:19:14.254 "is_configured": true, 00:19:14.254 "data_offset": 0, 00:19:14.254 "data_size": 65536 00:19:14.254 }, 00:19:14.254 { 00:19:14.254 "name": "BaseBdev3", 00:19:14.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.254 "is_configured": false, 00:19:14.254 "data_offset": 0, 00:19:14.254 "data_size": 0 00:19:14.254 } 00:19:14.254 ] 00:19:14.254 }' 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.254 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.821 [2024-11-27 14:18:45.164325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:14.821 [2024-11-27 14:18:45.164433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:14.821 [2024-11-27 14:18:45.164456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:14.821 [2024-11-27 14:18:45.164959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:14.821 [2024-11-27 14:18:45.170719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:14.821 [2024-11-27 14:18:45.170752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:14.821 [2024-11-27 14:18:45.171158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.821 BaseBdev3 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.821 14:18:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.821 [ 00:19:14.821 { 00:19:14.821 "name": "BaseBdev3", 00:19:14.821 "aliases": [ 00:19:14.821 "ab5d5c3e-4dfd-44ab-8f84-8b43c13bedeb" 00:19:14.821 ], 00:19:14.821 "product_name": "Malloc disk", 00:19:14.821 "block_size": 512, 00:19:14.821 "num_blocks": 65536, 00:19:14.821 "uuid": "ab5d5c3e-4dfd-44ab-8f84-8b43c13bedeb", 00:19:14.821 "assigned_rate_limits": { 00:19:14.821 "rw_ios_per_sec": 0, 00:19:14.821 "rw_mbytes_per_sec": 0, 00:19:14.821 "r_mbytes_per_sec": 0, 00:19:14.821 "w_mbytes_per_sec": 0 00:19:14.821 }, 00:19:14.821 "claimed": true, 00:19:14.821 "claim_type": "exclusive_write", 00:19:14.821 "zoned": false, 00:19:14.821 "supported_io_types": { 00:19:14.821 "read": true, 00:19:14.821 "write": true, 00:19:14.821 "unmap": true, 00:19:14.821 "flush": true, 00:19:14.821 "reset": true, 00:19:14.821 "nvme_admin": false, 00:19:14.821 "nvme_io": false, 00:19:14.821 "nvme_io_md": false, 00:19:14.821 "write_zeroes": true, 00:19:14.821 "zcopy": true, 00:19:14.821 "get_zone_info": false, 00:19:14.821 "zone_management": false, 00:19:14.821 "zone_append": false, 00:19:14.821 "compare": false, 00:19:14.821 "compare_and_write": false, 00:19:14.821 "abort": true, 00:19:14.821 "seek_hole": false, 00:19:14.821 "seek_data": false, 00:19:14.821 "copy": true, 00:19:14.821 "nvme_iov_md": false 00:19:14.821 }, 00:19:14.821 
"memory_domains": [ 00:19:14.821 { 00:19:14.821 "dma_device_id": "system", 00:19:14.821 "dma_device_type": 1 00:19:14.821 }, 00:19:14.821 { 00:19:14.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.821 "dma_device_type": 2 00:19:14.821 } 00:19:14.821 ], 00:19:14.821 "driver_specific": {} 00:19:14.821 } 00:19:14.821 ] 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.821 "name": "Existed_Raid", 00:19:14.821 "uuid": "080a163b-c71b-4c28-a5b0-8f01a5725453", 00:19:14.821 "strip_size_kb": 64, 00:19:14.821 "state": "online", 00:19:14.821 "raid_level": "raid5f", 00:19:14.821 "superblock": false, 00:19:14.821 "num_base_bdevs": 3, 00:19:14.821 "num_base_bdevs_discovered": 3, 00:19:14.821 "num_base_bdevs_operational": 3, 00:19:14.821 "base_bdevs_list": [ 00:19:14.821 { 00:19:14.821 "name": "BaseBdev1", 00:19:14.821 "uuid": "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2", 00:19:14.821 "is_configured": true, 00:19:14.821 "data_offset": 0, 00:19:14.821 "data_size": 65536 00:19:14.821 }, 00:19:14.821 { 00:19:14.821 "name": "BaseBdev2", 00:19:14.821 "uuid": "40cf2cd9-a4bb-46e7-b359-331789cc356b", 00:19:14.821 "is_configured": true, 00:19:14.821 "data_offset": 0, 00:19:14.821 "data_size": 65536 00:19:14.821 }, 00:19:14.821 { 00:19:14.821 "name": "BaseBdev3", 00:19:14.821 "uuid": "ab5d5c3e-4dfd-44ab-8f84-8b43c13bedeb", 00:19:14.821 "is_configured": true, 00:19:14.821 "data_offset": 0, 00:19:14.821 "data_size": 65536 00:19:14.821 } 00:19:14.821 ] 00:19:14.821 }' 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.821 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:15.389 [2024-11-27 14:18:45.721218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.389 "name": "Existed_Raid", 00:19:15.389 "aliases": [ 00:19:15.389 "080a163b-c71b-4c28-a5b0-8f01a5725453" 00:19:15.389 ], 00:19:15.389 "product_name": "Raid Volume", 00:19:15.389 "block_size": 512, 00:19:15.389 "num_blocks": 131072, 00:19:15.389 "uuid": "080a163b-c71b-4c28-a5b0-8f01a5725453", 00:19:15.389 "assigned_rate_limits": { 00:19:15.389 "rw_ios_per_sec": 0, 00:19:15.389 "rw_mbytes_per_sec": 0, 00:19:15.389 "r_mbytes_per_sec": 0, 00:19:15.389 "w_mbytes_per_sec": 0 00:19:15.389 }, 00:19:15.389 "claimed": false, 00:19:15.389 "zoned": false, 00:19:15.389 
"supported_io_types": { 00:19:15.389 "read": true, 00:19:15.389 "write": true, 00:19:15.389 "unmap": false, 00:19:15.389 "flush": false, 00:19:15.389 "reset": true, 00:19:15.389 "nvme_admin": false, 00:19:15.389 "nvme_io": false, 00:19:15.389 "nvme_io_md": false, 00:19:15.389 "write_zeroes": true, 00:19:15.389 "zcopy": false, 00:19:15.389 "get_zone_info": false, 00:19:15.389 "zone_management": false, 00:19:15.389 "zone_append": false, 00:19:15.389 "compare": false, 00:19:15.389 "compare_and_write": false, 00:19:15.389 "abort": false, 00:19:15.389 "seek_hole": false, 00:19:15.389 "seek_data": false, 00:19:15.389 "copy": false, 00:19:15.389 "nvme_iov_md": false 00:19:15.389 }, 00:19:15.389 "driver_specific": { 00:19:15.389 "raid": { 00:19:15.389 "uuid": "080a163b-c71b-4c28-a5b0-8f01a5725453", 00:19:15.389 "strip_size_kb": 64, 00:19:15.389 "state": "online", 00:19:15.389 "raid_level": "raid5f", 00:19:15.389 "superblock": false, 00:19:15.389 "num_base_bdevs": 3, 00:19:15.389 "num_base_bdevs_discovered": 3, 00:19:15.389 "num_base_bdevs_operational": 3, 00:19:15.389 "base_bdevs_list": [ 00:19:15.389 { 00:19:15.389 "name": "BaseBdev1", 00:19:15.389 "uuid": "aea27f7b-8acf-4cb3-a20a-9381f70ec2c2", 00:19:15.389 "is_configured": true, 00:19:15.389 "data_offset": 0, 00:19:15.389 "data_size": 65536 00:19:15.389 }, 00:19:15.389 { 00:19:15.389 "name": "BaseBdev2", 00:19:15.389 "uuid": "40cf2cd9-a4bb-46e7-b359-331789cc356b", 00:19:15.389 "is_configured": true, 00:19:15.389 "data_offset": 0, 00:19:15.389 "data_size": 65536 00:19:15.389 }, 00:19:15.389 { 00:19:15.389 "name": "BaseBdev3", 00:19:15.389 "uuid": "ab5d5c3e-4dfd-44ab-8f84-8b43c13bedeb", 00:19:15.389 "is_configured": true, 00:19:15.389 "data_offset": 0, 00:19:15.389 "data_size": 65536 00:19:15.389 } 00:19:15.389 ] 00:19:15.389 } 00:19:15.389 } 00:19:15.389 }' 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:15.389 BaseBdev2 00:19:15.389 BaseBdev3' 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.389 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.663 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.663 [2024-11-27 14:18:46.057140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local 
expected_state 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.663 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.663 14:18:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.922 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.922 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.922 "name": "Existed_Raid", 00:19:15.922 "uuid": "080a163b-c71b-4c28-a5b0-8f01a5725453", 00:19:15.923 "strip_size_kb": 64, 00:19:15.923 "state": "online", 00:19:15.923 "raid_level": "raid5f", 00:19:15.923 "superblock": false, 00:19:15.923 "num_base_bdevs": 3, 00:19:15.923 "num_base_bdevs_discovered": 2, 00:19:15.923 "num_base_bdevs_operational": 2, 00:19:15.923 "base_bdevs_list": [ 00:19:15.923 { 00:19:15.923 "name": null, 00:19:15.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.923 "is_configured": false, 00:19:15.923 "data_offset": 0, 00:19:15.923 "data_size": 65536 00:19:15.923 }, 00:19:15.923 { 00:19:15.923 "name": "BaseBdev2", 00:19:15.923 "uuid": "40cf2cd9-a4bb-46e7-b359-331789cc356b", 00:19:15.923 "is_configured": true, 00:19:15.923 "data_offset": 0, 00:19:15.923 "data_size": 65536 00:19:15.923 }, 00:19:15.923 { 00:19:15.923 "name": "BaseBdev3", 00:19:15.923 "uuid": "ab5d5c3e-4dfd-44ab-8f84-8b43c13bedeb", 00:19:15.923 "is_configured": true, 00:19:15.923 "data_offset": 0, 00:19:15.923 "data_size": 65536 00:19:15.923 } 00:19:15.923 ] 00:19:15.923 }' 00:19:15.923 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.923 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.182 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.442 [2024-11-27 14:18:46.729015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:16.442 [2024-11-27 14:18:46.729154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.442 [2024-11-27 14:18:46.823720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.442 14:18:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.442 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.442 [2024-11-27 14:18:46.879804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:16.442 [2024-11-27 14:18:46.879903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 BaseBdev2 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 14:18:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 [ 00:19:16.701 { 00:19:16.701 "name": "BaseBdev2", 00:19:16.701 "aliases": [ 00:19:16.701 "44c87023-69e3-4855-94e8-f764baef2e78" 00:19:16.701 ], 00:19:16.701 "product_name": "Malloc disk", 00:19:16.701 "block_size": 512, 00:19:16.701 "num_blocks": 65536, 00:19:16.701 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:16.701 "assigned_rate_limits": { 00:19:16.701 "rw_ios_per_sec": 0, 00:19:16.701 "rw_mbytes_per_sec": 0, 00:19:16.701 "r_mbytes_per_sec": 0, 00:19:16.701 "w_mbytes_per_sec": 0 00:19:16.701 }, 00:19:16.701 "claimed": false, 00:19:16.701 "zoned": false, 00:19:16.701 "supported_io_types": { 00:19:16.701 "read": true, 00:19:16.702 "write": true, 00:19:16.702 "unmap": true, 00:19:16.702 "flush": true, 00:19:16.702 "reset": true, 00:19:16.702 "nvme_admin": false, 00:19:16.702 "nvme_io": false, 00:19:16.702 "nvme_io_md": false, 00:19:16.702 "write_zeroes": true, 00:19:16.702 "zcopy": true, 00:19:16.702 "get_zone_info": false, 00:19:16.702 "zone_management": false, 00:19:16.702 "zone_append": false, 00:19:16.702 "compare": false, 00:19:16.702 "compare_and_write": false, 00:19:16.702 "abort": true, 00:19:16.702 "seek_hole": false, 00:19:16.702 "seek_data": false, 00:19:16.702 "copy": true, 00:19:16.702 "nvme_iov_md": false 00:19:16.702 }, 00:19:16.702 "memory_domains": [ 00:19:16.702 { 00:19:16.702 "dma_device_id": "system", 00:19:16.702 "dma_device_type": 1 00:19:16.702 }, 00:19:16.702 { 00:19:16.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.702 "dma_device_type": 2 00:19:16.702 } 00:19:16.702 ], 
00:19:16.702 "driver_specific": {} 00:19:16.702 } 00:19:16.702 ] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.702 BaseBdev3 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.702 [ 00:19:16.702 { 00:19:16.702 "name": "BaseBdev3", 00:19:16.702 "aliases": [ 00:19:16.702 "18aeb704-5aa0-4a58-aec7-ca44ca3347ce" 00:19:16.702 ], 00:19:16.702 "product_name": "Malloc disk", 00:19:16.702 "block_size": 512, 00:19:16.702 "num_blocks": 65536, 00:19:16.702 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:16.702 "assigned_rate_limits": { 00:19:16.702 "rw_ios_per_sec": 0, 00:19:16.702 "rw_mbytes_per_sec": 0, 00:19:16.702 "r_mbytes_per_sec": 0, 00:19:16.702 "w_mbytes_per_sec": 0 00:19:16.702 }, 00:19:16.702 "claimed": false, 00:19:16.702 "zoned": false, 00:19:16.702 "supported_io_types": { 00:19:16.702 "read": true, 00:19:16.702 "write": true, 00:19:16.702 "unmap": true, 00:19:16.702 "flush": true, 00:19:16.702 "reset": true, 00:19:16.702 "nvme_admin": false, 00:19:16.702 "nvme_io": false, 00:19:16.702 "nvme_io_md": false, 00:19:16.702 "write_zeroes": true, 00:19:16.702 "zcopy": true, 00:19:16.702 "get_zone_info": false, 00:19:16.702 "zone_management": false, 00:19:16.702 "zone_append": false, 00:19:16.702 "compare": false, 00:19:16.702 "compare_and_write": false, 00:19:16.702 "abort": true, 00:19:16.702 "seek_hole": false, 00:19:16.702 "seek_data": false, 00:19:16.702 "copy": true, 00:19:16.702 "nvme_iov_md": false 00:19:16.702 }, 00:19:16.702 "memory_domains": [ 00:19:16.702 { 00:19:16.702 "dma_device_id": "system", 00:19:16.702 "dma_device_type": 1 00:19:16.702 }, 00:19:16.702 { 00:19:16.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.702 "dma_device_type": 2 00:19:16.702 
} 00:19:16.702 ], 00:19:16.702 "driver_specific": {} 00:19:16.702 } 00:19:16.702 ] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.702 [2024-11-27 14:18:47.191464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:16.702 [2024-11-27 14:18:47.191521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:16.702 [2024-11-27 14:18:47.191555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.702 [2024-11-27 14:18:47.194024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.702 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.962 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.962 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.962 "name": "Existed_Raid", 00:19:16.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.962 "strip_size_kb": 64, 00:19:16.962 "state": "configuring", 00:19:16.962 "raid_level": "raid5f", 00:19:16.962 "superblock": false, 00:19:16.962 "num_base_bdevs": 3, 00:19:16.962 "num_base_bdevs_discovered": 2, 00:19:16.962 "num_base_bdevs_operational": 3, 00:19:16.962 "base_bdevs_list": [ 00:19:16.962 { 00:19:16.962 "name": "BaseBdev1", 00:19:16.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.962 "is_configured": false, 00:19:16.962 "data_offset": 0, 
00:19:16.962 "data_size": 0 00:19:16.962 }, 00:19:16.962 { 00:19:16.962 "name": "BaseBdev2", 00:19:16.962 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:16.962 "is_configured": true, 00:19:16.962 "data_offset": 0, 00:19:16.962 "data_size": 65536 00:19:16.962 }, 00:19:16.962 { 00:19:16.962 "name": "BaseBdev3", 00:19:16.962 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:16.962 "is_configured": true, 00:19:16.962 "data_offset": 0, 00:19:16.962 "data_size": 65536 00:19:16.962 } 00:19:16.962 ] 00:19:16.962 }' 00:19:16.962 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.962 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.221 [2024-11-27 14:18:47.715878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=3 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.221 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.480 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.480 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.480 "name": "Existed_Raid", 00:19:17.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.480 "strip_size_kb": 64, 00:19:17.480 "state": "configuring", 00:19:17.480 "raid_level": "raid5f", 00:19:17.480 "superblock": false, 00:19:17.480 "num_base_bdevs": 3, 00:19:17.480 "num_base_bdevs_discovered": 1, 00:19:17.480 "num_base_bdevs_operational": 3, 00:19:17.480 "base_bdevs_list": [ 00:19:17.480 { 00:19:17.480 "name": "BaseBdev1", 00:19:17.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.480 "is_configured": false, 00:19:17.480 "data_offset": 0, 00:19:17.480 "data_size": 0 00:19:17.480 }, 00:19:17.480 { 00:19:17.480 "name": null, 00:19:17.480 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:17.480 "is_configured": false, 00:19:17.480 "data_offset": 0, 00:19:17.480 "data_size": 65536 
00:19:17.480 }, 00:19:17.480 { 00:19:17.480 "name": "BaseBdev3", 00:19:17.480 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:17.480 "is_configured": true, 00:19:17.480 "data_offset": 0, 00:19:17.480 "data_size": 65536 00:19:17.480 } 00:19:17.480 ] 00:19:17.480 }' 00:19:17.480 14:18:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.480 14:18:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 [2024-11-27 14:18:48.355380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.047 BaseBdev1 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 [ 00:19:18.047 { 00:19:18.047 "name": "BaseBdev1", 00:19:18.047 "aliases": [ 00:19:18.047 "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6" 00:19:18.047 ], 00:19:18.047 "product_name": "Malloc disk", 00:19:18.047 "block_size": 512, 00:19:18.047 "num_blocks": 65536, 00:19:18.047 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:18.047 "assigned_rate_limits": { 00:19:18.047 "rw_ios_per_sec": 0, 00:19:18.047 "rw_mbytes_per_sec": 0, 00:19:18.047 "r_mbytes_per_sec": 0, 00:19:18.047 "w_mbytes_per_sec": 0 00:19:18.047 }, 00:19:18.047 "claimed": true, 00:19:18.047 "claim_type": "exclusive_write", 00:19:18.047 "zoned": false, 00:19:18.047 "supported_io_types": { 00:19:18.047 "read": true, 00:19:18.047 "write": true, 
00:19:18.047 "unmap": true, 00:19:18.047 "flush": true, 00:19:18.047 "reset": true, 00:19:18.047 "nvme_admin": false, 00:19:18.047 "nvme_io": false, 00:19:18.047 "nvme_io_md": false, 00:19:18.047 "write_zeroes": true, 00:19:18.047 "zcopy": true, 00:19:18.047 "get_zone_info": false, 00:19:18.047 "zone_management": false, 00:19:18.047 "zone_append": false, 00:19:18.047 "compare": false, 00:19:18.047 "compare_and_write": false, 00:19:18.047 "abort": true, 00:19:18.047 "seek_hole": false, 00:19:18.047 "seek_data": false, 00:19:18.047 "copy": true, 00:19:18.047 "nvme_iov_md": false 00:19:18.047 }, 00:19:18.047 "memory_domains": [ 00:19:18.047 { 00:19:18.047 "dma_device_id": "system", 00:19:18.047 "dma_device_type": 1 00:19:18.047 }, 00:19:18.047 { 00:19:18.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.047 "dma_device_type": 2 00:19:18.047 } 00:19:18.047 ], 00:19:18.047 "driver_specific": {} 00:19:18.047 } 00:19:18.047 ] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.047 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.047 "name": "Existed_Raid", 00:19:18.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.047 "strip_size_kb": 64, 00:19:18.047 "state": "configuring", 00:19:18.047 "raid_level": "raid5f", 00:19:18.047 "superblock": false, 00:19:18.047 "num_base_bdevs": 3, 00:19:18.047 "num_base_bdevs_discovered": 2, 00:19:18.047 "num_base_bdevs_operational": 3, 00:19:18.048 "base_bdevs_list": [ 00:19:18.048 { 00:19:18.048 "name": "BaseBdev1", 00:19:18.048 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:18.048 "is_configured": true, 00:19:18.048 "data_offset": 0, 00:19:18.048 "data_size": 65536 00:19:18.048 }, 00:19:18.048 { 00:19:18.048 "name": null, 00:19:18.048 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:18.048 "is_configured": false, 00:19:18.048 "data_offset": 0, 00:19:18.048 "data_size": 65536 00:19:18.048 }, 00:19:18.048 { 00:19:18.048 "name": "BaseBdev3", 00:19:18.048 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:18.048 
"is_configured": true, 00:19:18.048 "data_offset": 0, 00:19:18.048 "data_size": 65536 00:19:18.048 } 00:19:18.048 ] 00:19:18.048 }' 00:19:18.048 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.048 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.616 [2024-11-27 14:18:48.923671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.616 "name": "Existed_Raid", 00:19:18.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.616 "strip_size_kb": 64, 00:19:18.616 "state": "configuring", 00:19:18.616 "raid_level": "raid5f", 00:19:18.616 "superblock": false, 00:19:18.616 "num_base_bdevs": 3, 00:19:18.616 "num_base_bdevs_discovered": 1, 00:19:18.616 "num_base_bdevs_operational": 3, 00:19:18.616 "base_bdevs_list": [ 00:19:18.616 { 00:19:18.616 "name": "BaseBdev1", 00:19:18.616 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:18.616 
"is_configured": true, 00:19:18.616 "data_offset": 0, 00:19:18.616 "data_size": 65536 00:19:18.616 }, 00:19:18.616 { 00:19:18.616 "name": null, 00:19:18.616 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:18.616 "is_configured": false, 00:19:18.616 "data_offset": 0, 00:19:18.616 "data_size": 65536 00:19:18.616 }, 00:19:18.616 { 00:19:18.616 "name": null, 00:19:18.616 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:18.616 "is_configured": false, 00:19:18.616 "data_offset": 0, 00:19:18.616 "data_size": 65536 00:19:18.616 } 00:19:18.616 ] 00:19:18.616 }' 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.616 14:18:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 [2024-11-27 14:18:49.512022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.181 14:18:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.181 "name": "Existed_Raid", 
00:19:19.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.181 "strip_size_kb": 64, 00:19:19.181 "state": "configuring", 00:19:19.181 "raid_level": "raid5f", 00:19:19.181 "superblock": false, 00:19:19.181 "num_base_bdevs": 3, 00:19:19.181 "num_base_bdevs_discovered": 2, 00:19:19.181 "num_base_bdevs_operational": 3, 00:19:19.181 "base_bdevs_list": [ 00:19:19.181 { 00:19:19.181 "name": "BaseBdev1", 00:19:19.181 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:19.181 "is_configured": true, 00:19:19.181 "data_offset": 0, 00:19:19.181 "data_size": 65536 00:19:19.181 }, 00:19:19.181 { 00:19:19.181 "name": null, 00:19:19.181 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:19.181 "is_configured": false, 00:19:19.181 "data_offset": 0, 00:19:19.181 "data_size": 65536 00:19:19.181 }, 00:19:19.181 { 00:19:19.181 "name": "BaseBdev3", 00:19:19.181 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:19.181 "is_configured": true, 00:19:19.181 "data_offset": 0, 00:19:19.181 "data_size": 65536 00:19:19.181 } 00:19:19.181 ] 00:19:19.181 }' 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.181 14:18:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:19.766 14:18:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 [2024-11-27 14:18:50.072123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.766 14:18:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.766 "name": "Existed_Raid", 00:19:19.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.766 "strip_size_kb": 64, 00:19:19.766 "state": "configuring", 00:19:19.766 "raid_level": "raid5f", 00:19:19.766 "superblock": false, 00:19:19.766 "num_base_bdevs": 3, 00:19:19.766 "num_base_bdevs_discovered": 1, 00:19:19.766 "num_base_bdevs_operational": 3, 00:19:19.766 "base_bdevs_list": [ 00:19:19.766 { 00:19:19.766 "name": null, 00:19:19.766 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:19.766 "is_configured": false, 00:19:19.766 "data_offset": 0, 00:19:19.766 "data_size": 65536 00:19:19.766 }, 00:19:19.766 { 00:19:19.766 "name": null, 00:19:19.766 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:19.766 "is_configured": false, 00:19:19.766 "data_offset": 0, 00:19:19.766 "data_size": 65536 00:19:19.766 }, 00:19:19.766 { 00:19:19.766 "name": "BaseBdev3", 00:19:19.766 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:19.766 "is_configured": true, 00:19:19.766 "data_offset": 0, 00:19:19.766 "data_size": 65536 00:19:19.766 } 00:19:19.766 ] 00:19:19.766 }' 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.766 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.333 14:18:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.333 [2024-11-27 14:18:50.799582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.333 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.592 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.592 "name": "Existed_Raid", 00:19:20.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.592 "strip_size_kb": 64, 00:19:20.592 "state": "configuring", 00:19:20.592 "raid_level": "raid5f", 00:19:20.592 "superblock": false, 00:19:20.592 "num_base_bdevs": 3, 00:19:20.592 "num_base_bdevs_discovered": 2, 00:19:20.592 "num_base_bdevs_operational": 3, 00:19:20.592 "base_bdevs_list": [ 00:19:20.592 { 00:19:20.592 "name": null, 00:19:20.592 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:20.592 "is_configured": false, 00:19:20.592 "data_offset": 0, 00:19:20.592 "data_size": 65536 00:19:20.592 }, 00:19:20.592 { 00:19:20.592 "name": "BaseBdev2", 00:19:20.592 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:20.592 "is_configured": true, 00:19:20.592 "data_offset": 0, 00:19:20.592 "data_size": 65536 00:19:20.592 }, 00:19:20.592 { 00:19:20.592 "name": "BaseBdev3", 00:19:20.592 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:20.592 "is_configured": true, 00:19:20.592 "data_offset": 0, 00:19:20.592 "data_size": 65536 00:19:20.592 } 00:19:20.592 ] 
00:19:20.592 }' 00:19:20.592 14:18:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.592 14:18:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.850 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.850 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:20.850 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.850 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.850 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6d274bd-c058-4a1e-bd8d-dfb145c89ce6 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 [2024-11-27 14:18:51.488235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is 
claimed 00:19:21.112 [2024-11-27 14:18:51.488316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:21.112 [2024-11-27 14:18:51.488334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:21.112 [2024-11-27 14:18:51.488695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:21.112 [2024-11-27 14:18:51.493685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:21.112 [2024-11-27 14:18:51.493716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:21.112 [2024-11-27 14:18:51.494089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.112 NewBaseBdev 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 14:18:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 [ 00:19:21.112 { 00:19:21.112 "name": "NewBaseBdev", 00:19:21.112 "aliases": [ 00:19:21.112 "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6" 00:19:21.112 ], 00:19:21.112 "product_name": "Malloc disk", 00:19:21.112 "block_size": 512, 00:19:21.112 "num_blocks": 65536, 00:19:21.112 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:21.112 "assigned_rate_limits": { 00:19:21.112 "rw_ios_per_sec": 0, 00:19:21.112 "rw_mbytes_per_sec": 0, 00:19:21.112 "r_mbytes_per_sec": 0, 00:19:21.112 "w_mbytes_per_sec": 0 00:19:21.112 }, 00:19:21.112 "claimed": true, 00:19:21.112 "claim_type": "exclusive_write", 00:19:21.112 "zoned": false, 00:19:21.112 "supported_io_types": { 00:19:21.112 "read": true, 00:19:21.112 "write": true, 00:19:21.112 "unmap": true, 00:19:21.112 "flush": true, 00:19:21.112 "reset": true, 00:19:21.112 "nvme_admin": false, 00:19:21.112 "nvme_io": false, 00:19:21.112 "nvme_io_md": false, 00:19:21.112 "write_zeroes": true, 00:19:21.112 "zcopy": true, 00:19:21.112 "get_zone_info": false, 00:19:21.112 "zone_management": false, 00:19:21.112 "zone_append": false, 00:19:21.112 "compare": false, 00:19:21.112 "compare_and_write": false, 00:19:21.112 "abort": true, 00:19:21.112 "seek_hole": false, 00:19:21.112 "seek_data": false, 00:19:21.112 "copy": true, 00:19:21.112 "nvme_iov_md": false 00:19:21.112 }, 00:19:21.112 "memory_domains": [ 00:19:21.112 { 00:19:21.112 "dma_device_id": "system", 00:19:21.112 "dma_device_type": 1 00:19:21.112 }, 00:19:21.112 { 00:19:21.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.112 
"dma_device_type": 2 00:19:21.112 } 00:19:21.112 ], 00:19:21.112 "driver_specific": {} 00:19:21.112 } 00:19:21.112 ] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 14:18:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.112 "name": "Existed_Raid", 00:19:21.112 "uuid": "02fb138e-ef54-481e-a5d4-a5d9a8ee5b3a", 00:19:21.112 "strip_size_kb": 64, 00:19:21.112 "state": "online", 00:19:21.112 "raid_level": "raid5f", 00:19:21.112 "superblock": false, 00:19:21.112 "num_base_bdevs": 3, 00:19:21.112 "num_base_bdevs_discovered": 3, 00:19:21.112 "num_base_bdevs_operational": 3, 00:19:21.112 "base_bdevs_list": [ 00:19:21.112 { 00:19:21.112 "name": "NewBaseBdev", 00:19:21.112 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:21.112 "is_configured": true, 00:19:21.112 "data_offset": 0, 00:19:21.112 "data_size": 65536 00:19:21.112 }, 00:19:21.112 { 00:19:21.112 "name": "BaseBdev2", 00:19:21.112 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:21.112 "is_configured": true, 00:19:21.112 "data_offset": 0, 00:19:21.112 "data_size": 65536 00:19:21.112 }, 00:19:21.112 { 00:19:21.112 "name": "BaseBdev3", 00:19:21.112 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:21.112 "is_configured": true, 00:19:21.112 "data_offset": 0, 00:19:21.112 "data_size": 65536 00:19:21.112 } 00:19:21.112 ] 00:19:21.112 }' 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.112 14:18:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.678 14:18:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.678 [2024-11-27 14:18:52.104354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.678 "name": "Existed_Raid", 00:19:21.678 "aliases": [ 00:19:21.678 "02fb138e-ef54-481e-a5d4-a5d9a8ee5b3a" 00:19:21.678 ], 00:19:21.678 "product_name": "Raid Volume", 00:19:21.678 "block_size": 512, 00:19:21.678 "num_blocks": 131072, 00:19:21.678 "uuid": "02fb138e-ef54-481e-a5d4-a5d9a8ee5b3a", 00:19:21.678 "assigned_rate_limits": { 00:19:21.678 "rw_ios_per_sec": 0, 00:19:21.678 "rw_mbytes_per_sec": 0, 00:19:21.678 "r_mbytes_per_sec": 0, 00:19:21.678 "w_mbytes_per_sec": 0 00:19:21.678 }, 00:19:21.678 "claimed": false, 00:19:21.678 "zoned": false, 00:19:21.678 "supported_io_types": { 00:19:21.678 "read": true, 00:19:21.678 "write": true, 00:19:21.678 "unmap": false, 00:19:21.678 "flush": false, 00:19:21.678 "reset": true, 00:19:21.678 "nvme_admin": false, 00:19:21.678 "nvme_io": false, 00:19:21.678 "nvme_io_md": false, 00:19:21.678 "write_zeroes": true, 00:19:21.678 "zcopy": false, 00:19:21.678 "get_zone_info": false, 00:19:21.678 "zone_management": false, 00:19:21.678 "zone_append": false, 
00:19:21.678 "compare": false, 00:19:21.678 "compare_and_write": false, 00:19:21.678 "abort": false, 00:19:21.678 "seek_hole": false, 00:19:21.678 "seek_data": false, 00:19:21.678 "copy": false, 00:19:21.678 "nvme_iov_md": false 00:19:21.678 }, 00:19:21.678 "driver_specific": { 00:19:21.678 "raid": { 00:19:21.678 "uuid": "02fb138e-ef54-481e-a5d4-a5d9a8ee5b3a", 00:19:21.678 "strip_size_kb": 64, 00:19:21.678 "state": "online", 00:19:21.678 "raid_level": "raid5f", 00:19:21.678 "superblock": false, 00:19:21.678 "num_base_bdevs": 3, 00:19:21.678 "num_base_bdevs_discovered": 3, 00:19:21.678 "num_base_bdevs_operational": 3, 00:19:21.678 "base_bdevs_list": [ 00:19:21.678 { 00:19:21.678 "name": "NewBaseBdev", 00:19:21.678 "uuid": "c6d274bd-c058-4a1e-bd8d-dfb145c89ce6", 00:19:21.678 "is_configured": true, 00:19:21.678 "data_offset": 0, 00:19:21.678 "data_size": 65536 00:19:21.678 }, 00:19:21.678 { 00:19:21.678 "name": "BaseBdev2", 00:19:21.678 "uuid": "44c87023-69e3-4855-94e8-f764baef2e78", 00:19:21.678 "is_configured": true, 00:19:21.678 "data_offset": 0, 00:19:21.678 "data_size": 65536 00:19:21.678 }, 00:19:21.678 { 00:19:21.678 "name": "BaseBdev3", 00:19:21.678 "uuid": "18aeb704-5aa0-4a58-aec7-ca44ca3347ce", 00:19:21.678 "is_configured": true, 00:19:21.678 "data_offset": 0, 00:19:21.678 "data_size": 65536 00:19:21.678 } 00:19:21.678 ] 00:19:21.678 } 00:19:21.678 } 00:19:21.678 }' 00:19:21.678 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:21.937 BaseBdev2 00:19:21.937 BaseBdev3' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.937 [2024-11-27 14:18:52.424108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:21.937 [2024-11-27 14:18:52.424152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.937 [2024-11-27 14:18:52.424247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.937 [2024-11-27 14:18:52.424604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.937 [2024-11-27 14:18:52.424643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80449 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80449 ']' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80449 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.937 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80449 00:19:22.196 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.196 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.196 killing process with pid 80449 00:19:22.196 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80449' 00:19:22.196 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80449 00:19:22.196 [2024-11-27 14:18:52.459191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.196 14:18:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80449 00:19:22.454 [2024-11-27 14:18:52.750759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.393 14:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:23.393 00:19:23.393 real 0m12.223s 00:19:23.393 user 0m20.168s 00:19:23.393 sys 0m1.765s 00:19:23.393 14:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.393 14:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.393 
************************************ 00:19:23.393 END TEST raid5f_state_function_test 00:19:23.393 ************************************ 00:19:23.393 14:18:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:19:23.393 14:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:23.393 14:18:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.393 14:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.651 ************************************ 00:19:23.651 START TEST raid5f_state_function_test_sb 00:19:23.652 ************************************ 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81088 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid 
pid: 81088' 00:19:23.652 Process raid pid: 81088 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81088 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81088 ']' 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.652 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.652 [2024-11-27 14:18:54.034798] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:19:23.652 [2024-11-27 14:18:54.035688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.977 [2024-11-27 14:18:54.230686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.262 [2024-11-27 14:18:54.449047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.262 [2024-11-27 14:18:54.663525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.262 [2024-11-27 14:18:54.663592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.831 [2024-11-27 14:18:55.064310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.831 [2024-11-27 14:18:55.064404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.831 [2024-11-27 14:18:55.064421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.831 [2024-11-27 14:18:55.064438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.831 [2024-11-27 14:18:55.064448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:19:24.831 [2024-11-27 14:18:55.064462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.831 14:18:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.831 "name": "Existed_Raid", 00:19:24.831 "uuid": "7594a533-1ab3-41d9-ba66-5c16c90b2f36", 00:19:24.831 "strip_size_kb": 64, 00:19:24.831 "state": "configuring", 00:19:24.831 "raid_level": "raid5f", 00:19:24.831 "superblock": true, 00:19:24.831 "num_base_bdevs": 3, 00:19:24.831 "num_base_bdevs_discovered": 0, 00:19:24.831 "num_base_bdevs_operational": 3, 00:19:24.831 "base_bdevs_list": [ 00:19:24.831 { 00:19:24.831 "name": "BaseBdev1", 00:19:24.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.831 "is_configured": false, 00:19:24.831 "data_offset": 0, 00:19:24.831 "data_size": 0 00:19:24.831 }, 00:19:24.831 { 00:19:24.831 "name": "BaseBdev2", 00:19:24.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.831 "is_configured": false, 00:19:24.831 "data_offset": 0, 00:19:24.831 "data_size": 0 00:19:24.831 }, 00:19:24.831 { 00:19:24.831 "name": "BaseBdev3", 00:19:24.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.831 "is_configured": false, 00:19:24.831 "data_offset": 0, 00:19:24.831 "data_size": 0 00:19:24.831 } 00:19:24.831 ] 00:19:24.831 }' 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.831 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 [2024-11-27 14:18:55.548532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.090 
[2024-11-27 14:18:55.548578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 [2024-11-27 14:18:55.556527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.090 [2024-11-27 14:18:55.556596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.090 [2024-11-27 14:18:55.556628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.090 [2024-11-27 14:18:55.556644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.090 [2024-11-27 14:18:55.556653] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:25.090 [2024-11-27 14:18:55.556668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.090 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.349 [2024-11-27 14:18:55.604488] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.349 BaseBdev1 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.349 [ 00:19:25.349 { 00:19:25.349 "name": "BaseBdev1", 00:19:25.349 "aliases": [ 00:19:25.349 "a4ab224b-fda5-4af1-b29b-62acf1eb0715" 00:19:25.349 ], 00:19:25.349 "product_name": "Malloc disk", 00:19:25.349 "block_size": 512, 00:19:25.349 
"num_blocks": 65536, 00:19:25.349 "uuid": "a4ab224b-fda5-4af1-b29b-62acf1eb0715", 00:19:25.349 "assigned_rate_limits": { 00:19:25.349 "rw_ios_per_sec": 0, 00:19:25.349 "rw_mbytes_per_sec": 0, 00:19:25.349 "r_mbytes_per_sec": 0, 00:19:25.349 "w_mbytes_per_sec": 0 00:19:25.349 }, 00:19:25.349 "claimed": true, 00:19:25.349 "claim_type": "exclusive_write", 00:19:25.349 "zoned": false, 00:19:25.349 "supported_io_types": { 00:19:25.349 "read": true, 00:19:25.349 "write": true, 00:19:25.349 "unmap": true, 00:19:25.349 "flush": true, 00:19:25.349 "reset": true, 00:19:25.349 "nvme_admin": false, 00:19:25.349 "nvme_io": false, 00:19:25.349 "nvme_io_md": false, 00:19:25.349 "write_zeroes": true, 00:19:25.349 "zcopy": true, 00:19:25.349 "get_zone_info": false, 00:19:25.349 "zone_management": false, 00:19:25.349 "zone_append": false, 00:19:25.349 "compare": false, 00:19:25.349 "compare_and_write": false, 00:19:25.349 "abort": true, 00:19:25.349 "seek_hole": false, 00:19:25.349 "seek_data": false, 00:19:25.349 "copy": true, 00:19:25.349 "nvme_iov_md": false 00:19:25.349 }, 00:19:25.349 "memory_domains": [ 00:19:25.349 { 00:19:25.349 "dma_device_id": "system", 00:19:25.349 "dma_device_type": 1 00:19:25.349 }, 00:19:25.349 { 00:19:25.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.349 "dma_device_type": 2 00:19:25.349 } 00:19:25.349 ], 00:19:25.349 "driver_specific": {} 00:19:25.349 } 00:19:25.349 ] 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.349 "name": "Existed_Raid", 00:19:25.349 "uuid": "0fe33049-3843-4cd2-a982-9b9aca9fd1e2", 00:19:25.349 "strip_size_kb": 64, 00:19:25.349 "state": "configuring", 00:19:25.349 "raid_level": "raid5f", 00:19:25.349 "superblock": true, 00:19:25.349 "num_base_bdevs": 3, 00:19:25.349 "num_base_bdevs_discovered": 1, 00:19:25.349 "num_base_bdevs_operational": 3, 00:19:25.349 "base_bdevs_list": [ 00:19:25.349 { 00:19:25.349 
"name": "BaseBdev1", 00:19:25.349 "uuid": "a4ab224b-fda5-4af1-b29b-62acf1eb0715", 00:19:25.349 "is_configured": true, 00:19:25.349 "data_offset": 2048, 00:19:25.349 "data_size": 63488 00:19:25.349 }, 00:19:25.349 { 00:19:25.349 "name": "BaseBdev2", 00:19:25.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.349 "is_configured": false, 00:19:25.349 "data_offset": 0, 00:19:25.349 "data_size": 0 00:19:25.349 }, 00:19:25.349 { 00:19:25.349 "name": "BaseBdev3", 00:19:25.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.349 "is_configured": false, 00:19:25.349 "data_offset": 0, 00:19:25.349 "data_size": 0 00:19:25.349 } 00:19:25.349 ] 00:19:25.349 }' 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.349 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.916 [2024-11-27 14:18:56.152716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.916 [2024-11-27 14:18:56.152780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:25.916 [2024-11-27 14:18:56.164805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.916 [2024-11-27 14:18:56.167490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.916 [2024-11-27 14:18:56.167668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.916 [2024-11-27 14:18:56.167696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:25.916 [2024-11-27 14:18:56.167717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.916 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.917 "name": "Existed_Raid", 00:19:25.917 "uuid": "5f06b150-5d5b-408c-90f0-24119dccb560", 00:19:25.917 "strip_size_kb": 64, 00:19:25.917 "state": "configuring", 00:19:25.917 "raid_level": "raid5f", 00:19:25.917 "superblock": true, 00:19:25.917 "num_base_bdevs": 3, 00:19:25.917 "num_base_bdevs_discovered": 1, 00:19:25.917 "num_base_bdevs_operational": 3, 00:19:25.917 "base_bdevs_list": [ 00:19:25.917 { 00:19:25.917 "name": "BaseBdev1", 00:19:25.917 "uuid": "a4ab224b-fda5-4af1-b29b-62acf1eb0715", 00:19:25.917 "is_configured": true, 00:19:25.917 "data_offset": 2048, 00:19:25.917 "data_size": 63488 00:19:25.917 }, 00:19:25.917 { 00:19:25.917 "name": "BaseBdev2", 00:19:25.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.917 "is_configured": false, 00:19:25.917 "data_offset": 0, 00:19:25.917 "data_size": 0 00:19:25.917 }, 00:19:25.917 { 00:19:25.917 "name": "BaseBdev3", 00:19:25.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.917 "is_configured": false, 00:19:25.917 "data_offset": 0, 00:19:25.917 "data_size": 
0 00:19:25.917 } 00:19:25.917 ] 00:19:25.917 }' 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.917 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.176 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:26.176 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.176 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.435 [2024-11-27 14:18:56.717458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.435 BaseBdev2 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.435 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.435 [ 00:19:26.435 { 00:19:26.435 "name": "BaseBdev2", 00:19:26.435 "aliases": [ 00:19:26.435 "55c6a945-f4f1-422e-b5f3-af3c29c0668d" 00:19:26.435 ], 00:19:26.435 "product_name": "Malloc disk", 00:19:26.435 "block_size": 512, 00:19:26.435 "num_blocks": 65536, 00:19:26.435 "uuid": "55c6a945-f4f1-422e-b5f3-af3c29c0668d", 00:19:26.435 "assigned_rate_limits": { 00:19:26.435 "rw_ios_per_sec": 0, 00:19:26.435 "rw_mbytes_per_sec": 0, 00:19:26.435 "r_mbytes_per_sec": 0, 00:19:26.435 "w_mbytes_per_sec": 0 00:19:26.435 }, 00:19:26.435 "claimed": true, 00:19:26.435 "claim_type": "exclusive_write", 00:19:26.435 "zoned": false, 00:19:26.435 "supported_io_types": { 00:19:26.435 "read": true, 00:19:26.435 "write": true, 00:19:26.435 "unmap": true, 00:19:26.435 "flush": true, 00:19:26.435 "reset": true, 00:19:26.435 "nvme_admin": false, 00:19:26.435 "nvme_io": false, 00:19:26.435 "nvme_io_md": false, 00:19:26.435 "write_zeroes": true, 00:19:26.435 "zcopy": true, 00:19:26.435 "get_zone_info": false, 00:19:26.435 "zone_management": false, 00:19:26.435 "zone_append": false, 00:19:26.435 "compare": false, 00:19:26.435 "compare_and_write": false, 00:19:26.435 "abort": true, 00:19:26.435 "seek_hole": false, 00:19:26.435 "seek_data": false, 00:19:26.435 "copy": true, 00:19:26.435 "nvme_iov_md": false 00:19:26.435 }, 00:19:26.435 "memory_domains": [ 00:19:26.435 { 00:19:26.435 "dma_device_id": "system", 00:19:26.435 "dma_device_type": 1 00:19:26.435 }, 00:19:26.435 { 00:19:26.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.436 "dma_device_type": 2 00:19:26.436 } 
00:19:26.436 ], 00:19:26.436 "driver_specific": {} 00:19:26.436 } 00:19:26.436 ] 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.436 "name": "Existed_Raid", 00:19:26.436 "uuid": "5f06b150-5d5b-408c-90f0-24119dccb560", 00:19:26.436 "strip_size_kb": 64, 00:19:26.436 "state": "configuring", 00:19:26.436 "raid_level": "raid5f", 00:19:26.436 "superblock": true, 00:19:26.436 "num_base_bdevs": 3, 00:19:26.436 "num_base_bdevs_discovered": 2, 00:19:26.436 "num_base_bdevs_operational": 3, 00:19:26.436 "base_bdevs_list": [ 00:19:26.436 { 00:19:26.436 "name": "BaseBdev1", 00:19:26.436 "uuid": "a4ab224b-fda5-4af1-b29b-62acf1eb0715", 00:19:26.436 "is_configured": true, 00:19:26.436 "data_offset": 2048, 00:19:26.436 "data_size": 63488 00:19:26.436 }, 00:19:26.436 { 00:19:26.436 "name": "BaseBdev2", 00:19:26.436 "uuid": "55c6a945-f4f1-422e-b5f3-af3c29c0668d", 00:19:26.436 "is_configured": true, 00:19:26.436 "data_offset": 2048, 00:19:26.436 "data_size": 63488 00:19:26.436 }, 00:19:26.436 { 00:19:26.436 "name": "BaseBdev3", 00:19:26.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.436 "is_configured": false, 00:19:26.436 "data_offset": 0, 00:19:26.436 "data_size": 0 00:19:26.436 } 00:19:26.436 ] 00:19:26.436 }' 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.436 14:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.004 [2024-11-27 14:18:57.291038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:27.004 [2024-11-27 14:18:57.291675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:27.004 BaseBdev3 00:19:27.004 [2024-11-27 14:18:57.291879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:27.004 [2024-11-27 14:18:57.292350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.004 [2024-11-27 14:18:57.299225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:27.004 [2024-11-27 14:18:57.299259] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:27.004 [2024-11-27 14:18:57.299719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.004 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.004 [ 00:19:27.004 { 00:19:27.004 "name": "BaseBdev3", 00:19:27.004 "aliases": [ 00:19:27.004 "a250cfb4-8fcc-4f3a-b665-fa5b154a07a3" 00:19:27.004 ], 00:19:27.004 "product_name": "Malloc disk", 00:19:27.004 "block_size": 512, 00:19:27.004 "num_blocks": 65536, 00:19:27.004 "uuid": "a250cfb4-8fcc-4f3a-b665-fa5b154a07a3", 00:19:27.004 "assigned_rate_limits": { 00:19:27.004 "rw_ios_per_sec": 0, 00:19:27.004 "rw_mbytes_per_sec": 0, 00:19:27.004 "r_mbytes_per_sec": 0, 00:19:27.004 "w_mbytes_per_sec": 0 00:19:27.004 }, 00:19:27.004 "claimed": true, 00:19:27.005 "claim_type": "exclusive_write", 00:19:27.005 "zoned": false, 00:19:27.005 "supported_io_types": { 00:19:27.005 "read": true, 00:19:27.005 "write": true, 00:19:27.005 "unmap": true, 00:19:27.005 "flush": true, 00:19:27.005 "reset": true, 00:19:27.005 "nvme_admin": false, 00:19:27.005 "nvme_io": false, 00:19:27.005 "nvme_io_md": false, 00:19:27.005 "write_zeroes": true, 00:19:27.005 "zcopy": true, 00:19:27.005 "get_zone_info": false, 00:19:27.005 "zone_management": false, 00:19:27.005 "zone_append": false, 00:19:27.005 "compare": false, 00:19:27.005 "compare_and_write": false, 00:19:27.005 "abort": true, 00:19:27.005 "seek_hole": false, 00:19:27.005 "seek_data": false, 00:19:27.005 "copy": true, 00:19:27.005 
"nvme_iov_md": false 00:19:27.005 }, 00:19:27.005 "memory_domains": [ 00:19:27.005 { 00:19:27.005 "dma_device_id": "system", 00:19:27.005 "dma_device_type": 1 00:19:27.005 }, 00:19:27.005 { 00:19:27.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.005 "dma_device_type": 2 00:19:27.005 } 00:19:27.005 ], 00:19:27.005 "driver_specific": {} 00:19:27.005 } 00:19:27.005 ] 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.005 "name": "Existed_Raid", 00:19:27.005 "uuid": "5f06b150-5d5b-408c-90f0-24119dccb560", 00:19:27.005 "strip_size_kb": 64, 00:19:27.005 "state": "online", 00:19:27.005 "raid_level": "raid5f", 00:19:27.005 "superblock": true, 00:19:27.005 "num_base_bdevs": 3, 00:19:27.005 "num_base_bdevs_discovered": 3, 00:19:27.005 "num_base_bdevs_operational": 3, 00:19:27.005 "base_bdevs_list": [ 00:19:27.005 { 00:19:27.005 "name": "BaseBdev1", 00:19:27.005 "uuid": "a4ab224b-fda5-4af1-b29b-62acf1eb0715", 00:19:27.005 "is_configured": true, 00:19:27.005 "data_offset": 2048, 00:19:27.005 "data_size": 63488 00:19:27.005 }, 00:19:27.005 { 00:19:27.005 "name": "BaseBdev2", 00:19:27.005 "uuid": "55c6a945-f4f1-422e-b5f3-af3c29c0668d", 00:19:27.005 "is_configured": true, 00:19:27.005 "data_offset": 2048, 00:19:27.005 "data_size": 63488 00:19:27.005 }, 00:19:27.005 { 00:19:27.005 "name": "BaseBdev3", 00:19:27.005 "uuid": "a250cfb4-8fcc-4f3a-b665-fa5b154a07a3", 00:19:27.005 "is_configured": true, 00:19:27.005 "data_offset": 2048, 00:19:27.005 "data_size": 63488 00:19:27.005 } 00:19:27.005 ] 00:19:27.005 }' 00:19:27.005 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.005 14:18:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.573 [2024-11-27 14:18:57.879224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.573 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.573 "name": "Existed_Raid", 00:19:27.573 "aliases": [ 00:19:27.573 "5f06b150-5d5b-408c-90f0-24119dccb560" 00:19:27.573 ], 00:19:27.573 "product_name": "Raid Volume", 00:19:27.573 "block_size": 512, 00:19:27.573 "num_blocks": 126976, 00:19:27.573 "uuid": "5f06b150-5d5b-408c-90f0-24119dccb560", 00:19:27.573 "assigned_rate_limits": { 00:19:27.573 "rw_ios_per_sec": 0, 00:19:27.573 
"rw_mbytes_per_sec": 0, 00:19:27.573 "r_mbytes_per_sec": 0, 00:19:27.573 "w_mbytes_per_sec": 0 00:19:27.573 }, 00:19:27.573 "claimed": false, 00:19:27.573 "zoned": false, 00:19:27.573 "supported_io_types": { 00:19:27.573 "read": true, 00:19:27.573 "write": true, 00:19:27.573 "unmap": false, 00:19:27.573 "flush": false, 00:19:27.573 "reset": true, 00:19:27.573 "nvme_admin": false, 00:19:27.573 "nvme_io": false, 00:19:27.573 "nvme_io_md": false, 00:19:27.573 "write_zeroes": true, 00:19:27.573 "zcopy": false, 00:19:27.573 "get_zone_info": false, 00:19:27.573 "zone_management": false, 00:19:27.573 "zone_append": false, 00:19:27.573 "compare": false, 00:19:27.573 "compare_and_write": false, 00:19:27.573 "abort": false, 00:19:27.573 "seek_hole": false, 00:19:27.573 "seek_data": false, 00:19:27.573 "copy": false, 00:19:27.573 "nvme_iov_md": false 00:19:27.573 }, 00:19:27.573 "driver_specific": { 00:19:27.573 "raid": { 00:19:27.573 "uuid": "5f06b150-5d5b-408c-90f0-24119dccb560", 00:19:27.573 "strip_size_kb": 64, 00:19:27.573 "state": "online", 00:19:27.573 "raid_level": "raid5f", 00:19:27.573 "superblock": true, 00:19:27.573 "num_base_bdevs": 3, 00:19:27.573 "num_base_bdevs_discovered": 3, 00:19:27.573 "num_base_bdevs_operational": 3, 00:19:27.573 "base_bdevs_list": [ 00:19:27.573 { 00:19:27.573 "name": "BaseBdev1", 00:19:27.573 "uuid": "a4ab224b-fda5-4af1-b29b-62acf1eb0715", 00:19:27.573 "is_configured": true, 00:19:27.573 "data_offset": 2048, 00:19:27.573 "data_size": 63488 00:19:27.573 }, 00:19:27.573 { 00:19:27.573 "name": "BaseBdev2", 00:19:27.573 "uuid": "55c6a945-f4f1-422e-b5f3-af3c29c0668d", 00:19:27.573 "is_configured": true, 00:19:27.573 "data_offset": 2048, 00:19:27.573 "data_size": 63488 00:19:27.573 }, 00:19:27.573 { 00:19:27.573 "name": "BaseBdev3", 00:19:27.573 "uuid": "a250cfb4-8fcc-4f3a-b665-fa5b154a07a3", 00:19:27.574 "is_configured": true, 00:19:27.574 "data_offset": 2048, 00:19:27.574 "data_size": 63488 00:19:27.574 } 00:19:27.574 ] 00:19:27.574 } 
00:19:27.574 } 00:19:27.574 }' 00:19:27.574 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.574 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:27.574 BaseBdev2 00:19:27.574 BaseBdev3' 00:19:27.574 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.574 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.832 [2024-11-27 14:18:58.187220] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:27.832 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.833 "name": "Existed_Raid", 00:19:27.833 "uuid": "5f06b150-5d5b-408c-90f0-24119dccb560", 00:19:27.833 "strip_size_kb": 64, 00:19:27.833 "state": "online", 00:19:27.833 "raid_level": "raid5f", 00:19:27.833 "superblock": true, 00:19:27.833 "num_base_bdevs": 3, 00:19:27.833 "num_base_bdevs_discovered": 2, 00:19:27.833 "num_base_bdevs_operational": 2, 00:19:27.833 "base_bdevs_list": [ 00:19:27.833 { 00:19:27.833 "name": null, 00:19:27.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.833 "is_configured": false, 00:19:27.833 "data_offset": 0, 00:19:27.833 "data_size": 63488 00:19:27.833 }, 00:19:27.833 { 00:19:27.833 "name": "BaseBdev2", 00:19:27.833 "uuid": "55c6a945-f4f1-422e-b5f3-af3c29c0668d", 00:19:27.833 "is_configured": true, 00:19:27.833 "data_offset": 2048, 00:19:27.833 "data_size": 63488 00:19:27.833 }, 00:19:27.833 { 00:19:27.833 "name": "BaseBdev3", 00:19:27.833 "uuid": "a250cfb4-8fcc-4f3a-b665-fa5b154a07a3", 00:19:27.833 "is_configured": true, 00:19:27.833 "data_offset": 2048, 00:19:27.833 "data_size": 63488 00:19:27.833 } 00:19:27.833 ] 00:19:27.833 }' 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.833 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.400 14:18:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.400 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.400 [2024-11-27 14:18:58.875429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:28.400 [2024-11-27 14:18:58.875863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.658 [2024-11-27 14:18:58.966691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.658 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.658 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:28.659 14:18:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.659 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.659 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.659 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.659 14:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:28.659 14:18:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.659 [2024-11-27 14:18:59.026734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:28.659 [2024-11-27 14:18:59.026985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.659 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 BaseBdev2 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 [ 00:19:28.927 { 00:19:28.927 "name": "BaseBdev2", 00:19:28.927 "aliases": [ 00:19:28.927 "799170bb-4b30-4bbb-af42-92938217555b" 00:19:28.927 ], 00:19:28.927 "product_name": "Malloc disk", 00:19:28.927 "block_size": 512, 00:19:28.927 "num_blocks": 65536, 00:19:28.927 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:28.927 "assigned_rate_limits": { 00:19:28.927 "rw_ios_per_sec": 0, 00:19:28.927 "rw_mbytes_per_sec": 0, 00:19:28.927 "r_mbytes_per_sec": 0, 00:19:28.927 "w_mbytes_per_sec": 0 00:19:28.927 }, 00:19:28.927 "claimed": false, 00:19:28.927 "zoned": false, 00:19:28.927 "supported_io_types": { 00:19:28.927 "read": true, 00:19:28.927 "write": true, 00:19:28.927 "unmap": true, 00:19:28.927 "flush": true, 00:19:28.927 "reset": true, 00:19:28.927 "nvme_admin": false, 00:19:28.927 "nvme_io": false, 00:19:28.927 "nvme_io_md": false, 00:19:28.927 "write_zeroes": true, 00:19:28.927 "zcopy": true, 00:19:28.927 "get_zone_info": false, 00:19:28.927 "zone_management": false, 00:19:28.927 "zone_append": false, 
00:19:28.927 "compare": false, 00:19:28.927 "compare_and_write": false, 00:19:28.927 "abort": true, 00:19:28.927 "seek_hole": false, 00:19:28.927 "seek_data": false, 00:19:28.927 "copy": true, 00:19:28.927 "nvme_iov_md": false 00:19:28.927 }, 00:19:28.927 "memory_domains": [ 00:19:28.927 { 00:19:28.927 "dma_device_id": "system", 00:19:28.927 "dma_device_type": 1 00:19:28.927 }, 00:19:28.927 { 00:19:28.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.927 "dma_device_type": 2 00:19:28.927 } 00:19:28.927 ], 00:19:28.927 "driver_specific": {} 00:19:28.927 } 00:19:28.927 ] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 BaseBdev3 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:28.927 
14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 [ 00:19:28.927 { 00:19:28.927 "name": "BaseBdev3", 00:19:28.927 "aliases": [ 00:19:28.927 "29612715-b19e-44e1-8629-4f85a25e7e4d" 00:19:28.927 ], 00:19:28.927 "product_name": "Malloc disk", 00:19:28.927 "block_size": 512, 00:19:28.927 "num_blocks": 65536, 00:19:28.927 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:28.927 "assigned_rate_limits": { 00:19:28.927 "rw_ios_per_sec": 0, 00:19:28.927 "rw_mbytes_per_sec": 0, 00:19:28.927 "r_mbytes_per_sec": 0, 00:19:28.927 "w_mbytes_per_sec": 0 00:19:28.927 }, 00:19:28.927 "claimed": false, 00:19:28.927 "zoned": false, 00:19:28.927 "supported_io_types": { 00:19:28.927 "read": true, 00:19:28.927 "write": true, 00:19:28.927 "unmap": true, 00:19:28.927 "flush": true, 00:19:28.927 "reset": true, 00:19:28.927 "nvme_admin": false, 00:19:28.927 "nvme_io": false, 00:19:28.927 "nvme_io_md": false, 00:19:28.927 "write_zeroes": true, 00:19:28.927 "zcopy": true, 00:19:28.927 "get_zone_info": 
false, 00:19:28.927 "zone_management": false, 00:19:28.927 "zone_append": false, 00:19:28.927 "compare": false, 00:19:28.927 "compare_and_write": false, 00:19:28.927 "abort": true, 00:19:28.927 "seek_hole": false, 00:19:28.927 "seek_data": false, 00:19:28.927 "copy": true, 00:19:28.927 "nvme_iov_md": false 00:19:28.927 }, 00:19:28.927 "memory_domains": [ 00:19:28.927 { 00:19:28.927 "dma_device_id": "system", 00:19:28.927 "dma_device_type": 1 00:19:28.927 }, 00:19:28.927 { 00:19:28.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.927 "dma_device_type": 2 00:19:28.927 } 00:19:28.927 ], 00:19:28.927 "driver_specific": {} 00:19:28.927 } 00:19:28.927 ] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 [2024-11-27 14:18:59.335927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.927 [2024-11-27 14:18:59.336111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.927 [2024-11-27 14:18:59.336264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.927 [2024-11-27 14:18:59.338931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.927 14:18:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.927 "name": "Existed_Raid", 00:19:28.927 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:28.927 "strip_size_kb": 64, 00:19:28.927 "state": "configuring", 00:19:28.927 "raid_level": "raid5f", 00:19:28.927 "superblock": true, 00:19:28.927 "num_base_bdevs": 3, 00:19:28.927 "num_base_bdevs_discovered": 2, 00:19:28.927 "num_base_bdevs_operational": 3, 00:19:28.927 "base_bdevs_list": [ 00:19:28.927 { 00:19:28.927 "name": "BaseBdev1", 00:19:28.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.927 "is_configured": false, 00:19:28.927 "data_offset": 0, 00:19:28.927 "data_size": 0 00:19:28.927 }, 00:19:28.927 { 00:19:28.927 "name": "BaseBdev2", 00:19:28.927 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:28.927 "is_configured": true, 00:19:28.927 "data_offset": 2048, 00:19:28.927 "data_size": 63488 00:19:28.927 }, 00:19:28.927 { 00:19:28.927 "name": "BaseBdev3", 00:19:28.927 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:28.927 "is_configured": true, 00:19:28.927 "data_offset": 2048, 00:19:28.927 "data_size": 63488 00:19:28.927 } 00:19:28.927 ] 00:19:28.927 }' 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.927 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.494 [2024-11-27 14:18:59.868231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.494 
14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.494 "name": "Existed_Raid", 00:19:29.494 "uuid": 
"d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:29.494 "strip_size_kb": 64, 00:19:29.494 "state": "configuring", 00:19:29.494 "raid_level": "raid5f", 00:19:29.494 "superblock": true, 00:19:29.494 "num_base_bdevs": 3, 00:19:29.494 "num_base_bdevs_discovered": 1, 00:19:29.494 "num_base_bdevs_operational": 3, 00:19:29.494 "base_bdevs_list": [ 00:19:29.494 { 00:19:29.494 "name": "BaseBdev1", 00:19:29.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.494 "is_configured": false, 00:19:29.494 "data_offset": 0, 00:19:29.494 "data_size": 0 00:19:29.494 }, 00:19:29.494 { 00:19:29.494 "name": null, 00:19:29.494 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:29.494 "is_configured": false, 00:19:29.494 "data_offset": 0, 00:19:29.494 "data_size": 63488 00:19:29.494 }, 00:19:29.494 { 00:19:29.494 "name": "BaseBdev3", 00:19:29.494 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:29.494 "is_configured": true, 00:19:29.494 "data_offset": 2048, 00:19:29.494 "data_size": 63488 00:19:29.494 } 00:19:29.494 ] 00:19:29.494 }' 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.494 14:18:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.062 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:30.063 14:19:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 [2024-11-27 14:19:00.493330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.063 BaseBdev1 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 [ 00:19:30.063 { 00:19:30.063 "name": "BaseBdev1", 00:19:30.063 "aliases": [ 00:19:30.063 "3d047f23-e151-4686-b450-ce7e55b14256" 00:19:30.063 ], 00:19:30.063 "product_name": "Malloc disk", 00:19:30.063 "block_size": 512, 00:19:30.063 "num_blocks": 65536, 00:19:30.063 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:30.063 "assigned_rate_limits": { 00:19:30.063 "rw_ios_per_sec": 0, 00:19:30.063 "rw_mbytes_per_sec": 0, 00:19:30.063 "r_mbytes_per_sec": 0, 00:19:30.063 "w_mbytes_per_sec": 0 00:19:30.063 }, 00:19:30.063 "claimed": true, 00:19:30.063 "claim_type": "exclusive_write", 00:19:30.063 "zoned": false, 00:19:30.063 "supported_io_types": { 00:19:30.063 "read": true, 00:19:30.063 "write": true, 00:19:30.063 "unmap": true, 00:19:30.063 "flush": true, 00:19:30.063 "reset": true, 00:19:30.063 "nvme_admin": false, 00:19:30.063 "nvme_io": false, 00:19:30.063 "nvme_io_md": false, 00:19:30.063 "write_zeroes": true, 00:19:30.063 "zcopy": true, 00:19:30.063 "get_zone_info": false, 00:19:30.063 "zone_management": false, 00:19:30.063 "zone_append": false, 00:19:30.063 "compare": false, 00:19:30.063 "compare_and_write": false, 00:19:30.063 "abort": true, 00:19:30.063 "seek_hole": false, 00:19:30.063 "seek_data": false, 00:19:30.063 "copy": true, 00:19:30.063 "nvme_iov_md": false 00:19:30.063 }, 00:19:30.063 "memory_domains": [ 00:19:30.063 { 00:19:30.063 "dma_device_id": "system", 00:19:30.063 "dma_device_type": 1 00:19:30.063 }, 00:19:30.063 { 00:19:30.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.063 "dma_device_type": 2 00:19:30.063 } 00:19:30.063 ], 00:19:30.063 "driver_specific": {} 00:19:30.063 } 00:19:30.063 ] 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.063 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.321 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.321 "name": "Existed_Raid", 00:19:30.321 "uuid": 
"d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:30.321 "strip_size_kb": 64, 00:19:30.321 "state": "configuring", 00:19:30.321 "raid_level": "raid5f", 00:19:30.321 "superblock": true, 00:19:30.321 "num_base_bdevs": 3, 00:19:30.321 "num_base_bdevs_discovered": 2, 00:19:30.321 "num_base_bdevs_operational": 3, 00:19:30.321 "base_bdevs_list": [ 00:19:30.321 { 00:19:30.321 "name": "BaseBdev1", 00:19:30.321 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:30.321 "is_configured": true, 00:19:30.321 "data_offset": 2048, 00:19:30.321 "data_size": 63488 00:19:30.321 }, 00:19:30.321 { 00:19:30.321 "name": null, 00:19:30.321 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:30.321 "is_configured": false, 00:19:30.321 "data_offset": 0, 00:19:30.321 "data_size": 63488 00:19:30.321 }, 00:19:30.321 { 00:19:30.321 "name": "BaseBdev3", 00:19:30.321 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:30.321 "is_configured": true, 00:19:30.321 "data_offset": 2048, 00:19:30.321 "data_size": 63488 00:19:30.321 } 00:19:30.321 ] 00:19:30.321 }' 00:19:30.321 14:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.321 14:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.579 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.579 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.579 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.579 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:30.579 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.837 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:30.837 14:19:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:30.837 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.837 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.837 [2024-11-27 14:19:01.105679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:30.837 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.837 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.838 "name": "Existed_Raid", 00:19:30.838 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:30.838 "strip_size_kb": 64, 00:19:30.838 "state": "configuring", 00:19:30.838 "raid_level": "raid5f", 00:19:30.838 "superblock": true, 00:19:30.838 "num_base_bdevs": 3, 00:19:30.838 "num_base_bdevs_discovered": 1, 00:19:30.838 "num_base_bdevs_operational": 3, 00:19:30.838 "base_bdevs_list": [ 00:19:30.838 { 00:19:30.838 "name": "BaseBdev1", 00:19:30.838 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:30.838 "is_configured": true, 00:19:30.838 "data_offset": 2048, 00:19:30.838 "data_size": 63488 00:19:30.838 }, 00:19:30.838 { 00:19:30.838 "name": null, 00:19:30.838 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:30.838 "is_configured": false, 00:19:30.838 "data_offset": 0, 00:19:30.838 "data_size": 63488 00:19:30.838 }, 00:19:30.838 { 00:19:30.838 "name": null, 00:19:30.838 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:30.838 "is_configured": false, 00:19:30.838 "data_offset": 0, 00:19:30.838 "data_size": 63488 00:19:30.838 } 00:19:30.838 ] 00:19:30.838 }' 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.838 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.407 [2024-11-27 14:19:01.697993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.407 14:19:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.407 "name": "Existed_Raid", 00:19:31.407 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:31.407 "strip_size_kb": 64, 00:19:31.407 "state": "configuring", 00:19:31.407 "raid_level": "raid5f", 00:19:31.407 "superblock": true, 00:19:31.407 "num_base_bdevs": 3, 00:19:31.407 "num_base_bdevs_discovered": 2, 00:19:31.407 "num_base_bdevs_operational": 3, 00:19:31.407 "base_bdevs_list": [ 00:19:31.407 { 00:19:31.407 "name": "BaseBdev1", 00:19:31.407 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:31.407 "is_configured": true, 00:19:31.407 "data_offset": 2048, 00:19:31.407 "data_size": 63488 00:19:31.407 }, 00:19:31.407 { 00:19:31.407 "name": null, 00:19:31.407 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:31.407 "is_configured": false, 00:19:31.407 "data_offset": 0, 00:19:31.407 "data_size": 63488 00:19:31.407 }, 00:19:31.407 { 00:19:31.407 "name": "BaseBdev3", 00:19:31.407 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:31.407 
"is_configured": true, 00:19:31.407 "data_offset": 2048, 00:19:31.407 "data_size": 63488 00:19:31.407 } 00:19:31.407 ] 00:19:31.407 }' 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.407 14:19:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.975 [2024-11-27 14:19:02.278240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.975 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.975 "name": "Existed_Raid", 00:19:31.975 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:31.975 "strip_size_kb": 64, 00:19:31.975 "state": "configuring", 00:19:31.975 "raid_level": "raid5f", 00:19:31.975 "superblock": true, 00:19:31.975 "num_base_bdevs": 3, 00:19:31.975 "num_base_bdevs_discovered": 1, 00:19:31.975 "num_base_bdevs_operational": 3, 00:19:31.975 "base_bdevs_list": [ 00:19:31.975 { 00:19:31.975 "name": null, 00:19:31.975 
"uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:31.975 "is_configured": false, 00:19:31.975 "data_offset": 0, 00:19:31.975 "data_size": 63488 00:19:31.975 }, 00:19:31.975 { 00:19:31.975 "name": null, 00:19:31.975 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:31.975 "is_configured": false, 00:19:31.975 "data_offset": 0, 00:19:31.975 "data_size": 63488 00:19:31.975 }, 00:19:31.975 { 00:19:31.976 "name": "BaseBdev3", 00:19:31.976 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:31.976 "is_configured": true, 00:19:31.976 "data_offset": 2048, 00:19:31.976 "data_size": 63488 00:19:31.976 } 00:19:31.976 ] 00:19:31.976 }' 00:19:31.976 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.976 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.543 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.543 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.543 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.543 14:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:32.543 14:19:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.543 [2024-11-27 14:19:03.010070] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:32.543 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.544 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:32.802 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.802 "name": "Existed_Raid", 00:19:32.802 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:32.802 "strip_size_kb": 64, 00:19:32.802 "state": "configuring", 00:19:32.802 "raid_level": "raid5f", 00:19:32.802 "superblock": true, 00:19:32.802 "num_base_bdevs": 3, 00:19:32.802 "num_base_bdevs_discovered": 2, 00:19:32.802 "num_base_bdevs_operational": 3, 00:19:32.802 "base_bdevs_list": [ 00:19:32.802 { 00:19:32.802 "name": null, 00:19:32.802 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:32.802 "is_configured": false, 00:19:32.802 "data_offset": 0, 00:19:32.802 "data_size": 63488 00:19:32.802 }, 00:19:32.802 { 00:19:32.802 "name": "BaseBdev2", 00:19:32.802 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:32.802 "is_configured": true, 00:19:32.802 "data_offset": 2048, 00:19:32.802 "data_size": 63488 00:19:32.802 }, 00:19:32.802 { 00:19:32.802 "name": "BaseBdev3", 00:19:32.802 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:32.802 "is_configured": true, 00:19:32.802 "data_offset": 2048, 00:19:32.802 "data_size": 63488 00:19:32.802 } 00:19:32.802 ] 00:19:32.802 }' 00:19:32.802 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.802 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.060 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.060 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.060 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.060 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:33.060 14:19:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.318 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3d047f23-e151-4686-b450-ce7e55b14256 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 [2024-11-27 14:19:03.698996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:33.319 [2024-11-27 14:19:03.699367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:33.319 [2024-11-27 14:19:03.699393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:33.319 NewBaseBdev 00:19:33.319 [2024-11-27 14:19:03.699720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 [2024-11-27 14:19:03.704746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:33.319 [2024-11-27 14:19:03.704927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:33.319 [2024-11-27 14:19:03.705403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 [ 00:19:33.319 { 00:19:33.319 "name": "NewBaseBdev", 00:19:33.319 "aliases": [ 00:19:33.319 "3d047f23-e151-4686-b450-ce7e55b14256" 00:19:33.319 ], 00:19:33.319 "product_name": "Malloc disk", 00:19:33.319 "block_size": 512, 
00:19:33.319 "num_blocks": 65536, 00:19:33.319 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:33.319 "assigned_rate_limits": { 00:19:33.319 "rw_ios_per_sec": 0, 00:19:33.319 "rw_mbytes_per_sec": 0, 00:19:33.319 "r_mbytes_per_sec": 0, 00:19:33.319 "w_mbytes_per_sec": 0 00:19:33.319 }, 00:19:33.319 "claimed": true, 00:19:33.319 "claim_type": "exclusive_write", 00:19:33.319 "zoned": false, 00:19:33.319 "supported_io_types": { 00:19:33.319 "read": true, 00:19:33.319 "write": true, 00:19:33.319 "unmap": true, 00:19:33.319 "flush": true, 00:19:33.319 "reset": true, 00:19:33.319 "nvme_admin": false, 00:19:33.319 "nvme_io": false, 00:19:33.319 "nvme_io_md": false, 00:19:33.319 "write_zeroes": true, 00:19:33.319 "zcopy": true, 00:19:33.319 "get_zone_info": false, 00:19:33.319 "zone_management": false, 00:19:33.319 "zone_append": false, 00:19:33.319 "compare": false, 00:19:33.319 "compare_and_write": false, 00:19:33.319 "abort": true, 00:19:33.319 "seek_hole": false, 00:19:33.319 "seek_data": false, 00:19:33.319 "copy": true, 00:19:33.319 "nvme_iov_md": false 00:19:33.319 }, 00:19:33.319 "memory_domains": [ 00:19:33.319 { 00:19:33.319 "dma_device_id": "system", 00:19:33.319 "dma_device_type": 1 00:19:33.319 }, 00:19:33.319 { 00:19:33.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.319 "dma_device_type": 2 00:19:33.319 } 00:19:33.319 ], 00:19:33.319 "driver_specific": {} 00:19:33.319 } 00:19:33.319 ] 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.319 "name": "Existed_Raid", 00:19:33.319 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:33.319 "strip_size_kb": 64, 00:19:33.319 "state": "online", 00:19:33.319 "raid_level": "raid5f", 00:19:33.319 "superblock": true, 00:19:33.319 "num_base_bdevs": 3, 00:19:33.319 "num_base_bdevs_discovered": 3, 00:19:33.319 "num_base_bdevs_operational": 3, 00:19:33.319 "base_bdevs_list": [ 00:19:33.319 { 00:19:33.319 "name": 
"NewBaseBdev", 00:19:33.319 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:33.319 "is_configured": true, 00:19:33.319 "data_offset": 2048, 00:19:33.319 "data_size": 63488 00:19:33.319 }, 00:19:33.319 { 00:19:33.319 "name": "BaseBdev2", 00:19:33.319 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:33.319 "is_configured": true, 00:19:33.319 "data_offset": 2048, 00:19:33.319 "data_size": 63488 00:19:33.319 }, 00:19:33.319 { 00:19:33.319 "name": "BaseBdev3", 00:19:33.319 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:33.319 "is_configured": true, 00:19:33.319 "data_offset": 2048, 00:19:33.319 "data_size": 63488 00:19:33.319 } 00:19:33.319 ] 00:19:33.319 }' 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.319 14:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:33.884 14:19:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.884 [2024-11-27 14:19:04.300095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.884 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.884 "name": "Existed_Raid", 00:19:33.884 "aliases": [ 00:19:33.884 "d189000e-d7d5-44c6-bbd7-121ba203e90c" 00:19:33.884 ], 00:19:33.884 "product_name": "Raid Volume", 00:19:33.884 "block_size": 512, 00:19:33.884 "num_blocks": 126976, 00:19:33.884 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:33.884 "assigned_rate_limits": { 00:19:33.884 "rw_ios_per_sec": 0, 00:19:33.884 "rw_mbytes_per_sec": 0, 00:19:33.884 "r_mbytes_per_sec": 0, 00:19:33.884 "w_mbytes_per_sec": 0 00:19:33.884 }, 00:19:33.884 "claimed": false, 00:19:33.884 "zoned": false, 00:19:33.884 "supported_io_types": { 00:19:33.884 "read": true, 00:19:33.884 "write": true, 00:19:33.884 "unmap": false, 00:19:33.884 "flush": false, 00:19:33.884 "reset": true, 00:19:33.884 "nvme_admin": false, 00:19:33.884 "nvme_io": false, 00:19:33.884 "nvme_io_md": false, 00:19:33.884 "write_zeroes": true, 00:19:33.884 "zcopy": false, 00:19:33.884 "get_zone_info": false, 00:19:33.884 "zone_management": false, 00:19:33.884 "zone_append": false, 00:19:33.884 "compare": false, 00:19:33.884 "compare_and_write": false, 00:19:33.884 "abort": false, 00:19:33.884 "seek_hole": false, 00:19:33.884 "seek_data": false, 00:19:33.884 "copy": false, 00:19:33.884 "nvme_iov_md": false 00:19:33.884 }, 00:19:33.884 "driver_specific": { 00:19:33.884 "raid": { 00:19:33.884 "uuid": "d189000e-d7d5-44c6-bbd7-121ba203e90c", 00:19:33.884 "strip_size_kb": 64, 00:19:33.884 "state": "online", 00:19:33.884 "raid_level": "raid5f", 00:19:33.884 "superblock": true, 00:19:33.884 "num_base_bdevs": 3, 00:19:33.884 
"num_base_bdevs_discovered": 3, 00:19:33.884 "num_base_bdevs_operational": 3, 00:19:33.884 "base_bdevs_list": [ 00:19:33.884 { 00:19:33.884 "name": "NewBaseBdev", 00:19:33.884 "uuid": "3d047f23-e151-4686-b450-ce7e55b14256", 00:19:33.884 "is_configured": true, 00:19:33.884 "data_offset": 2048, 00:19:33.884 "data_size": 63488 00:19:33.884 }, 00:19:33.884 { 00:19:33.884 "name": "BaseBdev2", 00:19:33.884 "uuid": "799170bb-4b30-4bbb-af42-92938217555b", 00:19:33.884 "is_configured": true, 00:19:33.884 "data_offset": 2048, 00:19:33.884 "data_size": 63488 00:19:33.884 }, 00:19:33.884 { 00:19:33.884 "name": "BaseBdev3", 00:19:33.884 "uuid": "29612715-b19e-44e1-8629-4f85a25e7e4d", 00:19:33.885 "is_configured": true, 00:19:33.885 "data_offset": 2048, 00:19:33.885 "data_size": 63488 00:19:33.885 } 00:19:33.885 ] 00:19:33.885 } 00:19:33.885 } 00:19:33.885 }' 00:19:33.885 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:34.143 BaseBdev2 00:19:34.143 BaseBdev3' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.143 
14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.143 [2024-11-27 14:19:04.619919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:34.143 [2024-11-27 14:19:04.619956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.143 [2024-11-27 14:19:04.620061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.143 [2024-11-27 14:19:04.620423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.143 [2024-11-27 14:19:04.620449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.143 14:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81088 00:19:34.144 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81088 ']' 00:19:34.144 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81088 00:19:34.144 14:19:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:19:34.144 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.144 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81088 00:19:34.416 killing process with pid 81088 00:19:34.416 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.416 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.416 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81088' 00:19:34.416 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81088 00:19:34.416 [2024-11-27 14:19:04.658268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.416 14:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81088 00:19:34.684 [2024-11-27 14:19:04.940907] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:35.652 14:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:35.652 ************************************ 00:19:35.652 END TEST raid5f_state_function_test_sb 00:19:35.652 ************************************ 00:19:35.652 00:19:35.652 real 0m12.153s 00:19:35.652 user 0m19.997s 00:19:35.652 sys 0m1.781s 00:19:35.652 14:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.652 14:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.652 14:19:06 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:19:35.652 14:19:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:35.652 14:19:06 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.652 14:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.652 ************************************ 00:19:35.652 START TEST raid5f_superblock_test 00:19:35.652 ************************************ 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81727 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:35.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81727 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81727 ']' 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.652 14:19:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.911 [2024-11-27 14:19:06.240629] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:19:35.911 [2024-11-27 14:19:06.240867] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81727 ] 00:19:36.169 [2024-11-27 14:19:06.429407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.169 [2024-11-27 14:19:06.596698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.427 [2024-11-27 14:19:06.839611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.427 [2024-11-27 14:19:06.839689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 malloc1 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 [2024-11-27 14:19:07.352175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.994 [2024-11-27 14:19:07.352277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.994 [2024-11-27 14:19:07.352310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:36.994 [2024-11-27 14:19:07.352325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.994 [2024-11-27 14:19:07.355461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.994 [2024-11-27 14:19:07.355701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.994 pt1 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 malloc2 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 [2024-11-27 14:19:07.411420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.994 [2024-11-27 14:19:07.411682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.994 [2024-11-27 14:19:07.411731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:36.994 [2024-11-27 14:19:07.411746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.994 [2024-11-27 14:19:07.414633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.994 [2024-11-27 14:19:07.414677] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.994 pt2 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 malloc3 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 [2024-11-27 14:19:07.479522] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:36.994 [2024-11-27 14:19:07.479605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.994 [2024-11-27 14:19:07.479639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:36.994 [2024-11-27 14:19:07.479655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.994 [2024-11-27 14:19:07.482688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.994 [2024-11-27 14:19:07.482734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:36.994 pt3 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.994 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.994 [2024-11-27 14:19:07.487664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.994 [2024-11-27 14:19:07.490427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.994 [2024-11-27 14:19:07.490727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:36.994 [2024-11-27 14:19:07.491058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:36.994 [2024-11-27 14:19:07.491246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:19:36.995 [2024-11-27 14:19:07.491699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:36.995 [2024-11-27 14:19:07.497358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:36.995 [2024-11-27 14:19:07.497546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:36.995 [2024-11-27 14:19:07.497974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.995 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.253 
14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.253 "name": "raid_bdev1", 00:19:37.253 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:37.253 "strip_size_kb": 64, 00:19:37.253 "state": "online", 00:19:37.253 "raid_level": "raid5f", 00:19:37.253 "superblock": true, 00:19:37.253 "num_base_bdevs": 3, 00:19:37.253 "num_base_bdevs_discovered": 3, 00:19:37.253 "num_base_bdevs_operational": 3, 00:19:37.253 "base_bdevs_list": [ 00:19:37.253 { 00:19:37.253 "name": "pt1", 00:19:37.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.253 "is_configured": true, 00:19:37.253 "data_offset": 2048, 00:19:37.253 "data_size": 63488 00:19:37.253 }, 00:19:37.253 { 00:19:37.253 "name": "pt2", 00:19:37.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.253 "is_configured": true, 00:19:37.253 "data_offset": 2048, 00:19:37.253 "data_size": 63488 00:19:37.253 }, 00:19:37.253 { 00:19:37.253 "name": "pt3", 00:19:37.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:37.253 "is_configured": true, 00:19:37.253 "data_offset": 2048, 00:19:37.253 "data_size": 63488 00:19:37.253 } 00:19:37.253 ] 00:19:37.253 }' 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.253 14:19:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:37.821 14:19:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:37.821 [2024-11-27 14:19:08.040569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.821 "name": "raid_bdev1", 00:19:37.821 "aliases": [ 00:19:37.821 "7c5c5599-4d42-4c25-9224-f2b07cad85be" 00:19:37.821 ], 00:19:37.821 "product_name": "Raid Volume", 00:19:37.821 "block_size": 512, 00:19:37.821 "num_blocks": 126976, 00:19:37.821 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:37.821 "assigned_rate_limits": { 00:19:37.821 "rw_ios_per_sec": 0, 00:19:37.821 "rw_mbytes_per_sec": 0, 00:19:37.821 "r_mbytes_per_sec": 0, 00:19:37.821 "w_mbytes_per_sec": 0 00:19:37.821 }, 00:19:37.821 "claimed": false, 00:19:37.821 "zoned": false, 00:19:37.821 "supported_io_types": { 00:19:37.821 "read": true, 00:19:37.821 "write": true, 00:19:37.821 "unmap": false, 00:19:37.821 "flush": false, 00:19:37.821 "reset": true, 00:19:37.821 "nvme_admin": false, 00:19:37.821 "nvme_io": false, 00:19:37.821 "nvme_io_md": false, 
00:19:37.821 "write_zeroes": true, 00:19:37.821 "zcopy": false, 00:19:37.821 "get_zone_info": false, 00:19:37.821 "zone_management": false, 00:19:37.821 "zone_append": false, 00:19:37.821 "compare": false, 00:19:37.821 "compare_and_write": false, 00:19:37.821 "abort": false, 00:19:37.821 "seek_hole": false, 00:19:37.821 "seek_data": false, 00:19:37.821 "copy": false, 00:19:37.821 "nvme_iov_md": false 00:19:37.821 }, 00:19:37.821 "driver_specific": { 00:19:37.821 "raid": { 00:19:37.821 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:37.821 "strip_size_kb": 64, 00:19:37.821 "state": "online", 00:19:37.821 "raid_level": "raid5f", 00:19:37.821 "superblock": true, 00:19:37.821 "num_base_bdevs": 3, 00:19:37.821 "num_base_bdevs_discovered": 3, 00:19:37.821 "num_base_bdevs_operational": 3, 00:19:37.821 "base_bdevs_list": [ 00:19:37.821 { 00:19:37.821 "name": "pt1", 00:19:37.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.821 "is_configured": true, 00:19:37.821 "data_offset": 2048, 00:19:37.821 "data_size": 63488 00:19:37.821 }, 00:19:37.821 { 00:19:37.821 "name": "pt2", 00:19:37.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.821 "is_configured": true, 00:19:37.821 "data_offset": 2048, 00:19:37.821 "data_size": 63488 00:19:37.821 }, 00:19:37.821 { 00:19:37.821 "name": "pt3", 00:19:37.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:37.821 "is_configured": true, 00:19:37.821 "data_offset": 2048, 00:19:37.821 "data_size": 63488 00:19:37.821 } 00:19:37.821 ] 00:19:37.821 } 00:19:37.821 } 00:19:37.821 }' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:37.821 pt2 00:19:37.821 pt3' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:37.821 
14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.821 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:38.080 [2024-11-27 14:19:08.376624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7c5c5599-4d42-4c25-9224-f2b07cad85be 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7c5c5599-4d42-4c25-9224-f2b07cad85be ']' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:38.080 14:19:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 [2024-11-27 14:19:08.420394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.080 [2024-11-27 14:19:08.420430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.080 [2024-11-27 14:19:08.420525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.080 [2024-11-27 14:19:08.420619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.080 [2024-11-27 14:19:08.420635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:38.080 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.081 [2024-11-27 14:19:08.568554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:38.081 [2024-11-27 14:19:08.571375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:38.081 [2024-11-27 14:19:08.571590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:38.081 [2024-11-27 14:19:08.571791] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:38.081 [2024-11-27 14:19:08.572028] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:38.081 [2024-11-27 14:19:08.572069] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:38.081 [2024-11-27 14:19:08.572097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.081 [2024-11-27 14:19:08.572111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:38.081 request: 00:19:38.081 { 00:19:38.081 "name": "raid_bdev1", 00:19:38.081 "raid_level": "raid5f", 00:19:38.081 "base_bdevs": [ 00:19:38.081 "malloc1", 00:19:38.081 "malloc2", 00:19:38.081 "malloc3" 00:19:38.081 ], 00:19:38.081 "strip_size_kb": 64, 00:19:38.081 "superblock": false, 00:19:38.081 "method": "bdev_raid_create", 00:19:38.081 "req_id": 1 00:19:38.081 } 00:19:38.081 Got JSON-RPC error response 00:19:38.081 response: 00:19:38.081 { 00:19:38.081 "code": -17, 00:19:38.081 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:38.081 } 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:38.081 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.081 
14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.339 [2024-11-27 14:19:08.636761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.339 [2024-11-27 14:19:08.636853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.339 [2024-11-27 14:19:08.636898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:38.339 [2024-11-27 14:19:08.636913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.339 [2024-11-27 14:19:08.640118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.339 [2024-11-27 14:19:08.640164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:38.339 [2024-11-27 14:19:08.640307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:38.339 [2024-11-27 14:19:08.640391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.339 pt1 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.339 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.339 "name": "raid_bdev1", 00:19:38.339 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:38.339 "strip_size_kb": 64, 00:19:38.339 "state": "configuring", 00:19:38.339 "raid_level": "raid5f", 00:19:38.339 "superblock": true, 00:19:38.339 "num_base_bdevs": 3, 00:19:38.339 "num_base_bdevs_discovered": 1, 00:19:38.339 
"num_base_bdevs_operational": 3, 00:19:38.339 "base_bdevs_list": [ 00:19:38.339 { 00:19:38.339 "name": "pt1", 00:19:38.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.339 "is_configured": true, 00:19:38.339 "data_offset": 2048, 00:19:38.339 "data_size": 63488 00:19:38.339 }, 00:19:38.339 { 00:19:38.339 "name": null, 00:19:38.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.339 "is_configured": false, 00:19:38.339 "data_offset": 2048, 00:19:38.339 "data_size": 63488 00:19:38.339 }, 00:19:38.339 { 00:19:38.339 "name": null, 00:19:38.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.340 "is_configured": false, 00:19:38.340 "data_offset": 2048, 00:19:38.340 "data_size": 63488 00:19:38.340 } 00:19:38.340 ] 00:19:38.340 }' 00:19:38.340 14:19:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.340 14:19:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.913 [2024-11-27 14:19:09.140940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.913 [2024-11-27 14:19:09.141019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.913 [2024-11-27 14:19:09.141053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:38.913 [2024-11-27 14:19:09.141069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.913 [2024-11-27 14:19:09.141665] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.913 [2024-11-27 14:19:09.141703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.913 [2024-11-27 14:19:09.141856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:38.913 [2024-11-27 14:19:09.141912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.913 pt2 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.913 [2024-11-27 14:19:09.152924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.913 "name": "raid_bdev1", 00:19:38.913 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:38.913 "strip_size_kb": 64, 00:19:38.913 "state": "configuring", 00:19:38.913 "raid_level": "raid5f", 00:19:38.913 "superblock": true, 00:19:38.913 "num_base_bdevs": 3, 00:19:38.913 "num_base_bdevs_discovered": 1, 00:19:38.913 "num_base_bdevs_operational": 3, 00:19:38.913 "base_bdevs_list": [ 00:19:38.913 { 00:19:38.913 "name": "pt1", 00:19:38.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.913 "is_configured": true, 00:19:38.913 "data_offset": 2048, 00:19:38.913 "data_size": 63488 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "name": null, 00:19:38.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.913 "is_configured": false, 00:19:38.913 "data_offset": 0, 00:19:38.913 "data_size": 63488 00:19:38.913 }, 00:19:38.913 { 00:19:38.913 "name": null, 00:19:38.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.913 "is_configured": false, 00:19:38.913 "data_offset": 2048, 00:19:38.913 "data_size": 63488 00:19:38.913 } 00:19:38.913 ] 00:19:38.913 }' 00:19:38.913 14:19:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.913 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.171 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:39.171 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:39.171 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:39.171 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.171 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.171 [2024-11-27 14:19:09.661080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:39.171 [2024-11-27 14:19:09.661178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.171 [2024-11-27 14:19:09.661219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:39.171 [2024-11-27 14:19:09.661236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.171 [2024-11-27 14:19:09.661848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.172 [2024-11-27 14:19:09.661879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:39.172 [2024-11-27 14:19:09.662022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:39.172 [2024-11-27 14:19:09.662075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:39.172 pt2 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:39.172 14:19:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.172 [2024-11-27 14:19:09.669047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:39.172 [2024-11-27 14:19:09.669101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.172 [2024-11-27 14:19:09.669121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:39.172 [2024-11-27 14:19:09.669136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.172 [2024-11-27 14:19:09.669627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.172 [2024-11-27 14:19:09.669675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:39.172 [2024-11-27 14:19:09.669750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:39.172 [2024-11-27 14:19:09.669789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:39.172 [2024-11-27 14:19:09.669970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:39.172 [2024-11-27 14:19:09.670003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:39.172 [2024-11-27 14:19:09.670329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:39.172 [2024-11-27 14:19:09.675603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:39.172 [2024-11-27 14:19:09.675784] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:39.172 [2024-11-27 14:19:09.676158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.172 pt3 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.172 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.430 "name": "raid_bdev1", 00:19:39.430 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:39.430 "strip_size_kb": 64, 00:19:39.430 "state": "online", 00:19:39.430 "raid_level": "raid5f", 00:19:39.430 "superblock": true, 00:19:39.430 "num_base_bdevs": 3, 00:19:39.430 "num_base_bdevs_discovered": 3, 00:19:39.430 "num_base_bdevs_operational": 3, 00:19:39.430 "base_bdevs_list": [ 00:19:39.430 { 00:19:39.430 "name": "pt1", 00:19:39.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.430 "is_configured": true, 00:19:39.430 "data_offset": 2048, 00:19:39.430 "data_size": 63488 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "name": "pt2", 00:19:39.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.430 "is_configured": true, 00:19:39.430 "data_offset": 2048, 00:19:39.430 "data_size": 63488 00:19:39.430 }, 00:19:39.430 { 00:19:39.430 "name": "pt3", 00:19:39.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:39.430 "is_configured": true, 00:19:39.430 "data_offset": 2048, 00:19:39.430 "data_size": 63488 00:19:39.430 } 00:19:39.430 ] 00:19:39.430 }' 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.430 14:19:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.004 [2024-11-27 14:19:10.238654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:40.004 "name": "raid_bdev1", 00:19:40.004 "aliases": [ 00:19:40.004 "7c5c5599-4d42-4c25-9224-f2b07cad85be" 00:19:40.004 ], 00:19:40.004 "product_name": "Raid Volume", 00:19:40.004 "block_size": 512, 00:19:40.004 "num_blocks": 126976, 00:19:40.004 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:40.004 "assigned_rate_limits": { 00:19:40.004 "rw_ios_per_sec": 0, 00:19:40.004 "rw_mbytes_per_sec": 0, 00:19:40.004 "r_mbytes_per_sec": 0, 00:19:40.004 "w_mbytes_per_sec": 0 00:19:40.004 }, 00:19:40.004 "claimed": false, 00:19:40.004 "zoned": false, 00:19:40.004 "supported_io_types": { 00:19:40.004 "read": true, 00:19:40.004 "write": true, 00:19:40.004 "unmap": false, 00:19:40.004 "flush": false, 00:19:40.004 "reset": true, 00:19:40.004 "nvme_admin": false, 00:19:40.004 "nvme_io": false, 00:19:40.004 "nvme_io_md": false, 00:19:40.004 "write_zeroes": true, 00:19:40.004 "zcopy": false, 00:19:40.004 
"get_zone_info": false, 00:19:40.004 "zone_management": false, 00:19:40.004 "zone_append": false, 00:19:40.004 "compare": false, 00:19:40.004 "compare_and_write": false, 00:19:40.004 "abort": false, 00:19:40.004 "seek_hole": false, 00:19:40.004 "seek_data": false, 00:19:40.004 "copy": false, 00:19:40.004 "nvme_iov_md": false 00:19:40.004 }, 00:19:40.004 "driver_specific": { 00:19:40.004 "raid": { 00:19:40.004 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:40.004 "strip_size_kb": 64, 00:19:40.004 "state": "online", 00:19:40.004 "raid_level": "raid5f", 00:19:40.004 "superblock": true, 00:19:40.004 "num_base_bdevs": 3, 00:19:40.004 "num_base_bdevs_discovered": 3, 00:19:40.004 "num_base_bdevs_operational": 3, 00:19:40.004 "base_bdevs_list": [ 00:19:40.004 { 00:19:40.004 "name": "pt1", 00:19:40.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.004 "is_configured": true, 00:19:40.004 "data_offset": 2048, 00:19:40.004 "data_size": 63488 00:19:40.004 }, 00:19:40.004 { 00:19:40.004 "name": "pt2", 00:19:40.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.004 "is_configured": true, 00:19:40.004 "data_offset": 2048, 00:19:40.004 "data_size": 63488 00:19:40.004 }, 00:19:40.004 { 00:19:40.004 "name": "pt3", 00:19:40.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.004 "is_configured": true, 00:19:40.004 "data_offset": 2048, 00:19:40.004 "data_size": 63488 00:19:40.004 } 00:19:40.004 ] 00:19:40.004 } 00:19:40.004 } 00:19:40.004 }' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:40.004 pt2 00:19:40.004 pt3' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.004 14:19:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.004 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.263 [2024-11-27 14:19:10.582693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7c5c5599-4d42-4c25-9224-f2b07cad85be '!=' 7c5c5599-4d42-4c25-9224-f2b07cad85be ']' 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.263 [2024-11-27 14:19:10.634519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.263 "name": "raid_bdev1", 00:19:40.263 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:40.263 "strip_size_kb": 64, 00:19:40.263 "state": "online", 00:19:40.263 "raid_level": "raid5f", 00:19:40.263 "superblock": true, 00:19:40.263 "num_base_bdevs": 3, 00:19:40.263 "num_base_bdevs_discovered": 2, 00:19:40.263 "num_base_bdevs_operational": 2, 00:19:40.263 "base_bdevs_list": [ 00:19:40.263 { 00:19:40.263 "name": null, 00:19:40.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.263 "is_configured": false, 00:19:40.263 "data_offset": 0, 00:19:40.263 "data_size": 63488 00:19:40.263 }, 00:19:40.263 { 00:19:40.263 "name": "pt2", 00:19:40.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.263 "is_configured": true, 00:19:40.263 "data_offset": 2048, 00:19:40.263 "data_size": 63488 00:19:40.263 }, 00:19:40.263 { 00:19:40.263 "name": "pt3", 00:19:40.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.263 "is_configured": true, 00:19:40.263 "data_offset": 2048, 00:19:40.263 "data_size": 63488 00:19:40.263 } 00:19:40.263 ] 00:19:40.263 }' 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.263 14:19:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 [2024-11-27 14:19:11.170804] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.832 [2024-11-27 14:19:11.170841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.832 [2024-11-27 14:19:11.170953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.832 [2024-11-27 14:19:11.171049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.832 [2024-11-27 14:19:11.171072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 [2024-11-27 14:19:11.262845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.832 [2024-11-27 14:19:11.262973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.832 [2024-11-27 14:19:11.263019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:40.832 [2024-11-27 14:19:11.263037] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:19:40.832 [2024-11-27 14:19:11.266143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.832 [2024-11-27 14:19:11.266206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.832 [2024-11-27 14:19:11.266314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.832 [2024-11-27 14:19:11.266380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.832 pt2 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.832 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.832 "name": "raid_bdev1", 00:19:40.832 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:40.832 "strip_size_kb": 64, 00:19:40.832 "state": "configuring", 00:19:40.832 "raid_level": "raid5f", 00:19:40.832 "superblock": true, 00:19:40.832 "num_base_bdevs": 3, 00:19:40.832 "num_base_bdevs_discovered": 1, 00:19:40.832 "num_base_bdevs_operational": 2, 00:19:40.832 "base_bdevs_list": [ 00:19:40.833 { 00:19:40.833 "name": null, 00:19:40.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.833 "is_configured": false, 00:19:40.833 "data_offset": 2048, 00:19:40.833 "data_size": 63488 00:19:40.833 }, 00:19:40.833 { 00:19:40.833 "name": "pt2", 00:19:40.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.833 "is_configured": true, 00:19:40.833 "data_offset": 2048, 00:19:40.833 "data_size": 63488 00:19:40.833 }, 00:19:40.833 { 00:19:40.833 "name": null, 00:19:40.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.833 "is_configured": false, 00:19:40.833 "data_offset": 2048, 00:19:40.833 "data_size": 63488 00:19:40.833 } 00:19:40.833 ] 00:19:40.833 }' 00:19:40.833 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.833 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.399 [2024-11-27 14:19:11.795039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:41.399 [2024-11-27 14:19:11.795143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.399 [2024-11-27 14:19:11.795175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:41.399 [2024-11-27 14:19:11.795193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.399 [2024-11-27 14:19:11.795810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.399 [2024-11-27 14:19:11.795860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:41.399 [2024-11-27 14:19:11.795978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:41.399 [2024-11-27 14:19:11.796020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:41.399 [2024-11-27 14:19:11.796176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:41.399 [2024-11-27 14:19:11.796197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:41.399 [2024-11-27 14:19:11.796525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:41.399 [2024-11-27 14:19:11.801717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:41.399 [2024-11-27 14:19:11.801742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:19:41.399 [2024-11-27 14:19:11.802137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.399 pt3 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.399 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.400 14:19:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.400 "name": "raid_bdev1", 00:19:41.400 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:41.400 "strip_size_kb": 64, 00:19:41.400 "state": "online", 00:19:41.400 "raid_level": "raid5f", 00:19:41.400 "superblock": true, 00:19:41.400 "num_base_bdevs": 3, 00:19:41.400 "num_base_bdevs_discovered": 2, 00:19:41.400 "num_base_bdevs_operational": 2, 00:19:41.400 "base_bdevs_list": [ 00:19:41.400 { 00:19:41.400 "name": null, 00:19:41.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.400 "is_configured": false, 00:19:41.400 "data_offset": 2048, 00:19:41.400 "data_size": 63488 00:19:41.400 }, 00:19:41.400 { 00:19:41.400 "name": "pt2", 00:19:41.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.400 "is_configured": true, 00:19:41.400 "data_offset": 2048, 00:19:41.400 "data_size": 63488 00:19:41.400 }, 00:19:41.400 { 00:19:41.400 "name": "pt3", 00:19:41.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.400 "is_configured": true, 00:19:41.400 "data_offset": 2048, 00:19:41.400 "data_size": 63488 00:19:41.400 } 00:19:41.400 ] 00:19:41.400 }' 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.400 14:19:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.967 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.967 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.967 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.967 [2024-11-27 14:19:12.316077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.967 [2024-11-27 14:19:12.316252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.967 [2024-11-27 14:19:12.316368] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.968 [2024-11-27 14:19:12.316469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.968 [2024-11-27 14:19:12.316501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.968 [2024-11-27 14:19:12.392117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:41.968 [2024-11-27 14:19:12.392411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.968 [2024-11-27 14:19:12.392450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:41.968 [2024-11-27 14:19:12.392465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.968 [2024-11-27 14:19:12.395572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.968 [2024-11-27 14:19:12.395774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:41.968 [2024-11-27 14:19:12.395940] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:41.968 [2024-11-27 14:19:12.396007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.968 [2024-11-27 14:19:12.396221] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:41.968 [2024-11-27 14:19:12.396240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.968 [2024-11-27 14:19:12.396279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:41.968 [2024-11-27 14:19:12.396355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.968 pt1 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:19:41.968 14:19:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.968 "name": "raid_bdev1", 00:19:41.968 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:41.968 "strip_size_kb": 64, 00:19:41.968 "state": "configuring", 00:19:41.968 "raid_level": "raid5f", 00:19:41.968 
"superblock": true, 00:19:41.968 "num_base_bdevs": 3, 00:19:41.968 "num_base_bdevs_discovered": 1, 00:19:41.968 "num_base_bdevs_operational": 2, 00:19:41.968 "base_bdevs_list": [ 00:19:41.968 { 00:19:41.968 "name": null, 00:19:41.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.968 "is_configured": false, 00:19:41.968 "data_offset": 2048, 00:19:41.968 "data_size": 63488 00:19:41.968 }, 00:19:41.968 { 00:19:41.968 "name": "pt2", 00:19:41.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.968 "is_configured": true, 00:19:41.968 "data_offset": 2048, 00:19:41.968 "data_size": 63488 00:19:41.968 }, 00:19:41.968 { 00:19:41.968 "name": null, 00:19:41.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.968 "is_configured": false, 00:19:41.968 "data_offset": 2048, 00:19:41.968 "data_size": 63488 00:19:41.968 } 00:19:41.968 ] 00:19:41.968 }' 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.968 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.535 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.535 [2024-11-27 14:19:12.984652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.535 [2024-11-27 14:19:12.984747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.536 [2024-11-27 14:19:12.984778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:42.536 [2024-11-27 14:19:12.984793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.536 [2024-11-27 14:19:12.985456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.536 [2024-11-27 14:19:12.985498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.536 [2024-11-27 14:19:12.985638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:42.536 [2024-11-27 14:19:12.985670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.536 [2024-11-27 14:19:12.985855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:42.536 [2024-11-27 14:19:12.985886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:42.536 [2024-11-27 14:19:12.986204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:42.536 [2024-11-27 14:19:12.991307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:42.536 [2024-11-27 14:19:12.991351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:42.536 [2024-11-27 14:19:12.991705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.536 pt3 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.536 14:19:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.536 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.794 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.794 "name": "raid_bdev1", 00:19:42.794 "uuid": "7c5c5599-4d42-4c25-9224-f2b07cad85be", 00:19:42.794 "strip_size_kb": 64, 00:19:42.794 "state": "online", 00:19:42.794 "raid_level": 
"raid5f", 00:19:42.794 "superblock": true, 00:19:42.794 "num_base_bdevs": 3, 00:19:42.794 "num_base_bdevs_discovered": 2, 00:19:42.794 "num_base_bdevs_operational": 2, 00:19:42.794 "base_bdevs_list": [ 00:19:42.794 { 00:19:42.794 "name": null, 00:19:42.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.794 "is_configured": false, 00:19:42.794 "data_offset": 2048, 00:19:42.795 "data_size": 63488 00:19:42.795 }, 00:19:42.795 { 00:19:42.795 "name": "pt2", 00:19:42.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.795 "is_configured": true, 00:19:42.795 "data_offset": 2048, 00:19:42.795 "data_size": 63488 00:19:42.795 }, 00:19:42.795 { 00:19:42.795 "name": "pt3", 00:19:42.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:42.795 "is_configured": true, 00:19:42.795 "data_offset": 2048, 00:19:42.795 "data_size": 63488 00:19:42.795 } 00:19:42.795 ] 00:19:42.795 }' 00:19:42.795 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.795 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.053 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:43.053 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:43.053 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.053 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.053 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:43.320 [2024-11-27 14:19:13.585889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7c5c5599-4d42-4c25-9224-f2b07cad85be '!=' 7c5c5599-4d42-4c25-9224-f2b07cad85be ']' 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81727 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81727 ']' 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81727 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81727 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81727' 00:19:43.320 killing process with pid 81727 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81727 00:19:43.320 [2024-11-27 14:19:13.665515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:43.320 14:19:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81727 00:19:43.320 [2024-11-27 14:19:13.665644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.320 [2024-11-27 14:19:13.665726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.320 [2024-11-27 14:19:13.665745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:43.580 [2024-11-27 14:19:13.938744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.516 14:19:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:44.516 00:19:44.516 real 0m8.852s 00:19:44.516 user 0m14.465s 00:19:44.516 sys 0m1.318s 00:19:44.516 14:19:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.516 ************************************ 00:19:44.516 END TEST raid5f_superblock_test 00:19:44.516 ************************************ 00:19:44.516 14:19:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.776 14:19:15 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:44.776 14:19:15 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:19:44.776 14:19:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:44.776 14:19:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.776 14:19:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.776 ************************************ 00:19:44.776 START TEST raid5f_rebuild_test 00:19:44.776 ************************************ 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:44.776 14:19:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82180 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82180 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82180 ']' 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.776 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.776 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:44.776 Zero copy mechanism will not be used. 00:19:44.776 [2024-11-27 14:19:15.156933] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:19:44.776 [2024-11-27 14:19:15.157112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82180 ] 00:19:45.035 [2024-11-27 14:19:15.359270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.035 [2024-11-27 14:19:15.516947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.292 [2024-11-27 14:19:15.743718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.292 [2024-11-27 14:19:15.743788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.858 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.858 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 BaseBdev1_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 [2024-11-27 14:19:16.198501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.859 [2024-11-27 14:19:16.198801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.859 [2024-11-27 14:19:16.198903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:45.859 [2024-11-27 14:19:16.198947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.859 [2024-11-27 14:19:16.202972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.859 [2024-11-27 14:19:16.203045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.859 BaseBdev1 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 BaseBdev2_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 [2024-11-27 14:19:16.263234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:19:45.859 [2024-11-27 14:19:16.263351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.859 [2024-11-27 14:19:16.263404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:45.859 [2024-11-27 14:19:16.263440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.859 [2024-11-27 14:19:16.266657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.859 [2024-11-27 14:19:16.266710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:45.859 BaseBdev2 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 BaseBdev3_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 [2024-11-27 14:19:16.328411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:45.859 [2024-11-27 14:19:16.328496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.859 [2024-11-27 14:19:16.328534] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:19:45.859 [2024-11-27 14:19:16.328557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.859 [2024-11-27 14:19:16.332077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.859 [2024-11-27 14:19:16.332139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:45.859 BaseBdev3 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.859 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.117 spare_malloc 00:19:46.117 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.118 spare_delay 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.118 [2024-11-27 14:19:16.394736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.118 [2024-11-27 14:19:16.394849] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.118 [2024-11-27 14:19:16.394886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:46.118 [2024-11-27 14:19:16.394908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.118 [2024-11-27 14:19:16.398361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.118 [2024-11-27 14:19:16.398424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.118 spare 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.118 [2024-11-27 14:19:16.406892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.118 [2024-11-27 14:19:16.410178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.118 [2024-11-27 14:19:16.410460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.118 [2024-11-27 14:19:16.410766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:46.118 [2024-11-27 14:19:16.410953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:46.118 [2024-11-27 14:19:16.411397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:46.118 [2024-11-27 14:19:16.418700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:46.118 [2024-11-27 14:19:16.418741] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:46.118 [2024-11-27 14:19:16.419087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.118 14:19:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.118 "name": "raid_bdev1", 00:19:46.118 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:46.118 "strip_size_kb": 64, 00:19:46.118 "state": "online", 00:19:46.118 "raid_level": "raid5f", 00:19:46.118 "superblock": false, 00:19:46.118 "num_base_bdevs": 3, 00:19:46.118 "num_base_bdevs_discovered": 3, 00:19:46.118 "num_base_bdevs_operational": 3, 00:19:46.118 "base_bdevs_list": [ 00:19:46.118 { 00:19:46.118 "name": "BaseBdev1", 00:19:46.118 "uuid": "6a297f4f-eac3-5e93-907f-ff4396938276", 00:19:46.118 "is_configured": true, 00:19:46.118 "data_offset": 0, 00:19:46.118 "data_size": 65536 00:19:46.118 }, 00:19:46.118 { 00:19:46.118 "name": "BaseBdev2", 00:19:46.118 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:46.118 "is_configured": true, 00:19:46.118 "data_offset": 0, 00:19:46.118 "data_size": 65536 00:19:46.118 }, 00:19:46.118 { 00:19:46.118 "name": "BaseBdev3", 00:19:46.118 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:46.118 "is_configured": true, 00:19:46.118 "data_offset": 0, 00:19:46.118 "data_size": 65536 00:19:46.118 } 00:19:46.118 ] 00:19:46.118 }' 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.118 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.685 [2024-11-27 14:19:16.922641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.685 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:19:46.685 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:46.943 [2024-11-27 14:19:17.354609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:46.943 /dev/nbd0 00:19:46.943 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:46.943 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:46.943 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:46.943 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:46.943 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.943 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.944 1+0 records in 00:19:46.944 1+0 records out 00:19:46.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446337 s, 9.2 MB/s 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:46.944 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:19:47.880 512+0 records in 00:19:47.880 512+0 records out 00:19:47.880 67108864 bytes (67 MB, 64 MiB) copied, 0.614386 s, 109 MB/s 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:47.880 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:48.138 [2024-11-27 14:19:18.405799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.138 [2024-11-27 14:19:18.420507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.138 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.138 "name": "raid_bdev1", 00:19:48.138 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:48.138 "strip_size_kb": 64, 00:19:48.138 "state": "online", 00:19:48.139 "raid_level": "raid5f", 00:19:48.139 "superblock": false, 00:19:48.139 "num_base_bdevs": 3, 00:19:48.139 "num_base_bdevs_discovered": 2, 00:19:48.139 "num_base_bdevs_operational": 2, 00:19:48.139 "base_bdevs_list": [ 00:19:48.139 { 00:19:48.139 "name": null, 00:19:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.139 "is_configured": false, 00:19:48.139 "data_offset": 0, 00:19:48.139 "data_size": 65536 00:19:48.139 }, 00:19:48.139 { 00:19:48.139 "name": "BaseBdev2", 00:19:48.139 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:48.139 "is_configured": true, 00:19:48.139 "data_offset": 0, 00:19:48.139 "data_size": 65536 00:19:48.139 }, 00:19:48.139 { 00:19:48.139 "name": "BaseBdev3", 00:19:48.139 "uuid": 
"695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:48.139 "is_configured": true, 00:19:48.139 "data_offset": 0, 00:19:48.139 "data_size": 65536 00:19:48.139 } 00:19:48.139 ] 00:19:48.139 }' 00:19:48.139 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.139 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.705 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.705 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.705 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.705 [2024-11-27 14:19:18.956615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.705 [2024-11-27 14:19:18.973734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:19:48.705 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.705 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:48.705 [2024-11-27 14:19:18.981688] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.640 14:19:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.640 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.640 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.640 "name": "raid_bdev1", 00:19:49.640 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:49.640 "strip_size_kb": 64, 00:19:49.640 "state": "online", 00:19:49.640 "raid_level": "raid5f", 00:19:49.640 "superblock": false, 00:19:49.640 "num_base_bdevs": 3, 00:19:49.640 "num_base_bdevs_discovered": 3, 00:19:49.640 "num_base_bdevs_operational": 3, 00:19:49.640 "process": { 00:19:49.640 "type": "rebuild", 00:19:49.640 "target": "spare", 00:19:49.640 "progress": { 00:19:49.640 "blocks": 18432, 00:19:49.640 "percent": 14 00:19:49.640 } 00:19:49.640 }, 00:19:49.640 "base_bdevs_list": [ 00:19:49.640 { 00:19:49.640 "name": "spare", 00:19:49.640 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:49.640 "is_configured": true, 00:19:49.640 "data_offset": 0, 00:19:49.640 "data_size": 65536 00:19:49.640 }, 00:19:49.640 { 00:19:49.640 "name": "BaseBdev2", 00:19:49.640 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:49.640 "is_configured": true, 00:19:49.640 "data_offset": 0, 00:19:49.640 "data_size": 65536 00:19:49.640 }, 00:19:49.640 { 00:19:49.640 "name": "BaseBdev3", 00:19:49.640 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:49.640 "is_configured": true, 00:19:49.640 "data_offset": 0, 00:19:49.640 "data_size": 65536 00:19:49.640 } 00:19:49.640 ] 00:19:49.640 }' 00:19:49.640 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.640 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.640 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.899 [2024-11-27 14:19:20.155341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.899 [2024-11-27 14:19:20.199373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.899 [2024-11-27 14:19:20.199476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.899 [2024-11-27 14:19:20.199510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.899 [2024-11-27 14:19:20.199524] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.899 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.899 "name": "raid_bdev1", 00:19:49.899 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:49.899 "strip_size_kb": 64, 00:19:49.899 "state": "online", 00:19:49.899 "raid_level": "raid5f", 00:19:49.899 "superblock": false, 00:19:49.900 "num_base_bdevs": 3, 00:19:49.900 "num_base_bdevs_discovered": 2, 00:19:49.900 "num_base_bdevs_operational": 2, 00:19:49.900 "base_bdevs_list": [ 00:19:49.900 { 00:19:49.900 "name": null, 00:19:49.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.900 "is_configured": false, 00:19:49.900 "data_offset": 0, 00:19:49.900 "data_size": 65536 00:19:49.900 }, 00:19:49.900 { 00:19:49.900 "name": "BaseBdev2", 00:19:49.900 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:49.900 "is_configured": true, 00:19:49.900 "data_offset": 0, 00:19:49.900 "data_size": 65536 00:19:49.900 }, 00:19:49.900 { 00:19:49.900 "name": "BaseBdev3", 00:19:49.900 "uuid": 
"695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:49.900 "is_configured": true, 00:19:49.900 "data_offset": 0, 00:19:49.900 "data_size": 65536 00:19:49.900 } 00:19:49.900 ] 00:19:49.900 }' 00:19:49.900 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.900 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.467 "name": "raid_bdev1", 00:19:50.467 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:50.467 "strip_size_kb": 64, 00:19:50.467 "state": "online", 00:19:50.467 "raid_level": "raid5f", 00:19:50.467 "superblock": false, 00:19:50.467 "num_base_bdevs": 3, 00:19:50.467 "num_base_bdevs_discovered": 2, 00:19:50.467 "num_base_bdevs_operational": 2, 00:19:50.467 "base_bdevs_list": [ 00:19:50.467 { 00:19:50.467 
"name": null, 00:19:50.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.467 "is_configured": false, 00:19:50.467 "data_offset": 0, 00:19:50.467 "data_size": 65536 00:19:50.467 }, 00:19:50.467 { 00:19:50.467 "name": "BaseBdev2", 00:19:50.467 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:50.467 "is_configured": true, 00:19:50.467 "data_offset": 0, 00:19:50.467 "data_size": 65536 00:19:50.467 }, 00:19:50.467 { 00:19:50.467 "name": "BaseBdev3", 00:19:50.467 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:50.467 "is_configured": true, 00:19:50.467 "data_offset": 0, 00:19:50.467 "data_size": 65536 00:19:50.467 } 00:19:50.467 ] 00:19:50.467 }' 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.467 [2024-11-27 14:19:20.923532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.467 [2024-11-27 14:19:20.938790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.467 14:19:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:50.467 [2024-11-27 14:19:20.946442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.844 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.844 "name": "raid_bdev1", 00:19:51.844 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:51.844 "strip_size_kb": 64, 00:19:51.844 "state": "online", 00:19:51.844 "raid_level": "raid5f", 00:19:51.844 "superblock": false, 00:19:51.844 "num_base_bdevs": 3, 00:19:51.844 "num_base_bdevs_discovered": 3, 00:19:51.844 "num_base_bdevs_operational": 3, 00:19:51.844 "process": { 00:19:51.844 "type": "rebuild", 00:19:51.844 "target": "spare", 00:19:51.844 "progress": { 00:19:51.844 "blocks": 18432, 00:19:51.844 "percent": 14 00:19:51.844 } 00:19:51.844 }, 00:19:51.844 "base_bdevs_list": [ 00:19:51.844 { 00:19:51.844 "name": "spare", 00:19:51.844 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:51.844 "is_configured": true, 00:19:51.844 "data_offset": 0, 
00:19:51.844 "data_size": 65536 00:19:51.844 }, 00:19:51.844 { 00:19:51.844 "name": "BaseBdev2", 00:19:51.844 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:51.844 "is_configured": true, 00:19:51.844 "data_offset": 0, 00:19:51.844 "data_size": 65536 00:19:51.844 }, 00:19:51.844 { 00:19:51.844 "name": "BaseBdev3", 00:19:51.844 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:51.844 "is_configured": true, 00:19:51.844 "data_offset": 0, 00:19:51.844 "data_size": 65536 00:19:51.844 } 00:19:51.844 ] 00:19:51.844 }' 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=604 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.844 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.845 14:19:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.845 "name": "raid_bdev1", 00:19:51.845 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:51.845 "strip_size_kb": 64, 00:19:51.845 "state": "online", 00:19:51.845 "raid_level": "raid5f", 00:19:51.845 "superblock": false, 00:19:51.845 "num_base_bdevs": 3, 00:19:51.845 "num_base_bdevs_discovered": 3, 00:19:51.845 "num_base_bdevs_operational": 3, 00:19:51.845 "process": { 00:19:51.845 "type": "rebuild", 00:19:51.845 "target": "spare", 00:19:51.845 "progress": { 00:19:51.845 "blocks": 22528, 00:19:51.845 "percent": 17 00:19:51.845 } 00:19:51.845 }, 00:19:51.845 "base_bdevs_list": [ 00:19:51.845 { 00:19:51.845 "name": "spare", 00:19:51.845 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:51.845 "is_configured": true, 00:19:51.845 "data_offset": 0, 00:19:51.845 "data_size": 65536 00:19:51.845 }, 00:19:51.845 { 00:19:51.845 "name": "BaseBdev2", 00:19:51.845 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:51.845 "is_configured": true, 00:19:51.845 "data_offset": 0, 00:19:51.845 "data_size": 65536 00:19:51.845 }, 00:19:51.845 { 00:19:51.845 "name": "BaseBdev3", 00:19:51.845 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:51.845 "is_configured": true, 00:19:51.845 "data_offset": 0, 00:19:51.845 "data_size": 65536 00:19:51.845 } 
00:19:51.845 ] 00:19:51.845 }' 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.845 14:19:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.220 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.220 "name": "raid_bdev1", 00:19:53.221 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:53.221 
"strip_size_kb": 64, 00:19:53.221 "state": "online", 00:19:53.221 "raid_level": "raid5f", 00:19:53.221 "superblock": false, 00:19:53.221 "num_base_bdevs": 3, 00:19:53.221 "num_base_bdevs_discovered": 3, 00:19:53.221 "num_base_bdevs_operational": 3, 00:19:53.221 "process": { 00:19:53.221 "type": "rebuild", 00:19:53.221 "target": "spare", 00:19:53.221 "progress": { 00:19:53.221 "blocks": 47104, 00:19:53.221 "percent": 35 00:19:53.221 } 00:19:53.221 }, 00:19:53.221 "base_bdevs_list": [ 00:19:53.221 { 00:19:53.221 "name": "spare", 00:19:53.221 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:53.221 "is_configured": true, 00:19:53.221 "data_offset": 0, 00:19:53.221 "data_size": 65536 00:19:53.221 }, 00:19:53.221 { 00:19:53.221 "name": "BaseBdev2", 00:19:53.221 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:53.221 "is_configured": true, 00:19:53.221 "data_offset": 0, 00:19:53.221 "data_size": 65536 00:19:53.221 }, 00:19:53.221 { 00:19:53.221 "name": "BaseBdev3", 00:19:53.221 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:53.221 "is_configured": true, 00:19:53.221 "data_offset": 0, 00:19:53.221 "data_size": 65536 00:19:53.221 } 00:19:53.221 ] 00:19:53.221 }' 00:19:53.221 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.221 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.221 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.221 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.221 14:19:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.156 14:19:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.156 "name": "raid_bdev1", 00:19:54.156 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:54.156 "strip_size_kb": 64, 00:19:54.156 "state": "online", 00:19:54.156 "raid_level": "raid5f", 00:19:54.156 "superblock": false, 00:19:54.156 "num_base_bdevs": 3, 00:19:54.156 "num_base_bdevs_discovered": 3, 00:19:54.156 "num_base_bdevs_operational": 3, 00:19:54.156 "process": { 00:19:54.156 "type": "rebuild", 00:19:54.156 "target": "spare", 00:19:54.156 "progress": { 00:19:54.156 "blocks": 69632, 00:19:54.156 "percent": 53 00:19:54.156 } 00:19:54.156 }, 00:19:54.156 "base_bdevs_list": [ 00:19:54.156 { 00:19:54.156 "name": "spare", 00:19:54.156 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:54.156 "is_configured": true, 00:19:54.156 "data_offset": 0, 00:19:54.156 "data_size": 65536 00:19:54.156 }, 00:19:54.156 { 00:19:54.156 "name": "BaseBdev2", 00:19:54.156 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:54.156 
"is_configured": true, 00:19:54.156 "data_offset": 0, 00:19:54.156 "data_size": 65536 00:19:54.156 }, 00:19:54.156 { 00:19:54.156 "name": "BaseBdev3", 00:19:54.156 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:54.156 "is_configured": true, 00:19:54.156 "data_offset": 0, 00:19:54.156 "data_size": 65536 00:19:54.156 } 00:19:54.156 ] 00:19:54.156 }' 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.156 14:19:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.533 "name": "raid_bdev1", 00:19:55.533 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:55.533 "strip_size_kb": 64, 00:19:55.533 "state": "online", 00:19:55.533 "raid_level": "raid5f", 00:19:55.533 "superblock": false, 00:19:55.533 "num_base_bdevs": 3, 00:19:55.533 "num_base_bdevs_discovered": 3, 00:19:55.533 "num_base_bdevs_operational": 3, 00:19:55.533 "process": { 00:19:55.533 "type": "rebuild", 00:19:55.533 "target": "spare", 00:19:55.533 "progress": { 00:19:55.533 "blocks": 94208, 00:19:55.533 "percent": 71 00:19:55.533 } 00:19:55.533 }, 00:19:55.533 "base_bdevs_list": [ 00:19:55.533 { 00:19:55.533 "name": "spare", 00:19:55.533 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:55.533 "is_configured": true, 00:19:55.533 "data_offset": 0, 00:19:55.533 "data_size": 65536 00:19:55.533 }, 00:19:55.533 { 00:19:55.533 "name": "BaseBdev2", 00:19:55.533 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:55.533 "is_configured": true, 00:19:55.533 "data_offset": 0, 00:19:55.533 "data_size": 65536 00:19:55.533 }, 00:19:55.533 { 00:19:55.533 "name": "BaseBdev3", 00:19:55.533 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:55.533 "is_configured": true, 00:19:55.533 "data_offset": 0, 00:19:55.533 "data_size": 65536 00:19:55.533 } 00:19:55.533 ] 00:19:55.533 }' 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.533 14:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.533 14:19:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.527 "name": "raid_bdev1", 00:19:56.527 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:56.527 "strip_size_kb": 64, 00:19:56.527 "state": "online", 00:19:56.527 "raid_level": "raid5f", 00:19:56.527 "superblock": false, 00:19:56.527 "num_base_bdevs": 3, 00:19:56.527 "num_base_bdevs_discovered": 3, 00:19:56.527 "num_base_bdevs_operational": 3, 00:19:56.527 "process": { 00:19:56.527 "type": "rebuild", 00:19:56.527 "target": "spare", 00:19:56.527 "progress": { 00:19:56.527 "blocks": 116736, 00:19:56.527 "percent": 89 00:19:56.527 } 00:19:56.527 }, 00:19:56.527 "base_bdevs_list": [ 00:19:56.527 { 
00:19:56.527 "name": "spare", 00:19:56.527 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:56.527 "is_configured": true, 00:19:56.527 "data_offset": 0, 00:19:56.527 "data_size": 65536 00:19:56.527 }, 00:19:56.527 { 00:19:56.527 "name": "BaseBdev2", 00:19:56.527 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:56.527 "is_configured": true, 00:19:56.527 "data_offset": 0, 00:19:56.527 "data_size": 65536 00:19:56.527 }, 00:19:56.527 { 00:19:56.527 "name": "BaseBdev3", 00:19:56.527 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:56.527 "is_configured": true, 00:19:56.527 "data_offset": 0, 00:19:56.527 "data_size": 65536 00:19:56.527 } 00:19:56.527 ] 00:19:56.527 }' 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.527 14:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:57.096 [2024-11-27 14:19:27.431834] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:57.096 [2024-11-27 14:19:27.432030] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:57.096 [2024-11-27 14:19:27.432117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.662 14:19:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.662 "name": "raid_bdev1", 00:19:57.662 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:57.662 "strip_size_kb": 64, 00:19:57.662 "state": "online", 00:19:57.662 "raid_level": "raid5f", 00:19:57.662 "superblock": false, 00:19:57.662 "num_base_bdevs": 3, 00:19:57.662 "num_base_bdevs_discovered": 3, 00:19:57.662 "num_base_bdevs_operational": 3, 00:19:57.662 "base_bdevs_list": [ 00:19:57.662 { 00:19:57.662 "name": "spare", 00:19:57.662 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:57.662 "is_configured": true, 00:19:57.662 "data_offset": 0, 00:19:57.662 "data_size": 65536 00:19:57.662 }, 00:19:57.662 { 00:19:57.662 "name": "BaseBdev2", 00:19:57.662 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:57.662 "is_configured": true, 00:19:57.662 "data_offset": 0, 00:19:57.662 "data_size": 65536 00:19:57.662 }, 00:19:57.662 { 00:19:57.662 "name": "BaseBdev3", 00:19:57.662 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:57.662 "is_configured": true, 00:19:57.662 "data_offset": 0, 00:19:57.662 "data_size": 65536 00:19:57.662 } 
00:19:57.662 ] 00:19:57.662 }' 00:19:57.662 14:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.662 "name": "raid_bdev1", 00:19:57.662 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:57.662 "strip_size_kb": 64, 00:19:57.662 "state": "online", 00:19:57.662 "raid_level": "raid5f", 00:19:57.662 "superblock": false, 
00:19:57.662 "num_base_bdevs": 3, 00:19:57.662 "num_base_bdevs_discovered": 3, 00:19:57.662 "num_base_bdevs_operational": 3, 00:19:57.662 "base_bdevs_list": [ 00:19:57.662 { 00:19:57.662 "name": "spare", 00:19:57.662 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:57.662 "is_configured": true, 00:19:57.662 "data_offset": 0, 00:19:57.662 "data_size": 65536 00:19:57.662 }, 00:19:57.662 { 00:19:57.662 "name": "BaseBdev2", 00:19:57.662 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:57.662 "is_configured": true, 00:19:57.662 "data_offset": 0, 00:19:57.662 "data_size": 65536 00:19:57.662 }, 00:19:57.662 { 00:19:57.662 "name": "BaseBdev3", 00:19:57.662 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 00:19:57.662 "is_configured": true, 00:19:57.662 "data_offset": 0, 00:19:57.662 "data_size": 65536 00:19:57.662 } 00:19:57.662 ] 00:19:57.662 }' 00:19:57.662 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.920 
14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.920 "name": "raid_bdev1", 00:19:57.920 "uuid": "8841904a-0d0f-47e2-bf64-8b60f1cd5b78", 00:19:57.920 "strip_size_kb": 64, 00:19:57.920 "state": "online", 00:19:57.920 "raid_level": "raid5f", 00:19:57.920 "superblock": false, 00:19:57.920 "num_base_bdevs": 3, 00:19:57.920 "num_base_bdevs_discovered": 3, 00:19:57.920 "num_base_bdevs_operational": 3, 00:19:57.920 "base_bdevs_list": [ 00:19:57.920 { 00:19:57.920 "name": "spare", 00:19:57.920 "uuid": "f29c733c-2094-5ddc-bbf1-50e5834ab318", 00:19:57.920 "is_configured": true, 00:19:57.920 "data_offset": 0, 00:19:57.920 "data_size": 65536 00:19:57.920 }, 00:19:57.920 { 00:19:57.920 "name": "BaseBdev2", 00:19:57.920 "uuid": "0857c763-5457-5d08-8c0f-ea346a9475c6", 00:19:57.920 "is_configured": true, 00:19:57.920 "data_offset": 0, 00:19:57.920 "data_size": 65536 00:19:57.920 }, 00:19:57.920 { 00:19:57.920 "name": "BaseBdev3", 00:19:57.920 "uuid": "695b45ce-4e26-5a35-892e-7d05e9c621df", 
00:19:57.920 "is_configured": true, 00:19:57.920 "data_offset": 0, 00:19:57.920 "data_size": 65536 00:19:57.920 } 00:19:57.920 ] 00:19:57.920 }' 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.920 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.485 [2024-11-27 14:19:28.809033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.485 [2024-11-27 14:19:28.809073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.485 [2024-11-27 14:19:28.809189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.485 [2024-11-27 14:19:28.809298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.485 [2024-11-27 14:19:28.809335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.485 14:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:58.743 /dev/nbd0 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.743 1+0 records in 00:19:58.743 1+0 records out 00:19:58.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308865 s, 13.3 MB/s 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.743 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:59.310 /dev/nbd1 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:59.310 14:19:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:59.310 1+0 records in 00:19:59.310 1+0 records out 00:19:59.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045751 s, 9.0 MB/s 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # 
cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.310 14:19:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:59.874 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.874 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.874 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.875 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82180 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82180 ']' 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82180 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82180 00:20:00.132 killing process with pid 82180 00:20:00.132 Received shutdown signal, test time was about 60.000000 seconds 00:20:00.132 00:20:00.132 Latency(us) 00:20:00.132 [2024-11-27T14:19:30.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.132 [2024-11-27T14:19:30.645Z] =================================================================================================================== 00:20:00.132 [2024-11-27T14:19:30.645Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.132 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82180' 00:20:00.133 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82180 00:20:00.133 14:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82180 00:20:00.133 [2024-11-27 14:19:30.420484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.391 [2024-11-27 14:19:30.779256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.325 14:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:01.325 00:20:01.325 real 0m16.793s 00:20:01.325 user 0m21.485s 00:20:01.325 sys 0m2.219s 00:20:01.325 14:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.325 14:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.325 ************************************ 00:20:01.325 END TEST raid5f_rebuild_test 00:20:01.325 ************************************ 00:20:01.582 14:19:31 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:20:01.582 14:19:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:01.582 14:19:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.582 14:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.582 ************************************ 00:20:01.582 START TEST raid5f_rebuild_test_sb 00:20:01.582 ************************************ 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82630 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82630 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82630 ']' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.582 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.582 [2024-11-27 14:19:32.010350] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:20:01.583 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:01.583 Zero copy mechanism will not be used. 00:20:01.583 [2024-11-27 14:19:32.010553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82630 ] 00:20:01.840 [2024-11-27 14:19:32.205010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.098 [2024-11-27 14:19:32.368572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.357 [2024-11-27 14:19:32.619010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.357 [2024-11-27 14:19:32.619096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.615 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.615 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:02.615 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:02.615 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:02.615 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.615 14:19:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.874 BaseBdev1_malloc 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.874 [2024-11-27 14:19:33.152077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.874 [2024-11-27 14:19:33.152208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.874 [2024-11-27 14:19:33.152261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:02.874 [2024-11-27 14:19:33.152295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.874 [2024-11-27 14:19:33.156364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.874 [2024-11-27 14:19:33.156438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.874 BaseBdev1 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.874 BaseBdev2_malloc 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.874 [2024-11-27 14:19:33.208888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:02.874 [2024-11-27 14:19:33.208967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.874 [2024-11-27 14:19:33.209011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:02.874 [2024-11-27 14:19:33.209029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.874 [2024-11-27 14:19:33.211880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.874 [2024-11-27 14:19:33.211929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:02.874 BaseBdev2 00:20:02.874 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 BaseBdev3_malloc 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 [2024-11-27 14:19:33.278990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:02.875 [2024-11-27 14:19:33.279060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.875 [2024-11-27 14:19:33.279093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:02.875 [2024-11-27 14:19:33.279117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.875 [2024-11-27 14:19:33.281864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.875 [2024-11-27 14:19:33.281913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:02.875 BaseBdev3 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 spare_malloc 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 spare_delay 00:20:02.875 
14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 [2024-11-27 14:19:33.339172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:02.875 [2024-11-27 14:19:33.339252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.875 [2024-11-27 14:19:33.339280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:02.875 [2024-11-27 14:19:33.339298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.875 [2024-11-27 14:19:33.342081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.875 [2024-11-27 14:19:33.342146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:02.875 spare 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 [2024-11-27 14:19:33.347271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.875 [2024-11-27 14:19:33.349688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.875 [2024-11-27 14:19:33.349788] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:02.875 [2024-11-27 14:19:33.350062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:02.875 [2024-11-27 14:19:33.350092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:02.875 [2024-11-27 14:19:33.350435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:02.875 [2024-11-27 14:19:33.355612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:02.875 [2024-11-27 14:19:33.355651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:02.875 [2024-11-27 14:19:33.355894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
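The raid_bdev_configure_cont trace above reports "blockcnt 126976, blocklen 512" for the 3-disk raid5f set built from 32 MiB malloc bdevs with superblocks. That figure follows from data_size * (num_base_bdevs - 1), where data_size is each base bdev's block count minus the 2048-block superblock offset visible in base_bdevs_list. A minimal sketch of the arithmetic (variable names are ours; the constant values are taken from the log, not from SPDK code):

```shell
# Reproduce the "blockcnt 126976, blocklen 512" figure from the trace above.
blocklen=512                                           # blocklen from raid_bdev_configure_cont
base_blocks=$(( 32 * 1024 * 1024 / blocklen ))         # 32 MiB malloc bdev -> 65536 blocks
data_offset=2048                                       # superblock reservation (data_offset in base_bdevs_list)
data_size=$(( base_blocks - data_offset ))             # 63488 usable blocks per base bdev (data_size)
num_base_bdevs=3
raid_blockcnt=$(( data_size * (num_base_bdevs - 1) ))  # raid5f: one disk's worth of capacity holds parity
echo "$raid_blockcnt"                                  # 126976
```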
00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.875 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.133 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.133 "name": "raid_bdev1", 00:20:03.133 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:03.133 "strip_size_kb": 64, 00:20:03.133 "state": "online", 00:20:03.133 "raid_level": "raid5f", 00:20:03.133 "superblock": true, 00:20:03.133 "num_base_bdevs": 3, 00:20:03.133 "num_base_bdevs_discovered": 3, 00:20:03.133 "num_base_bdevs_operational": 3, 00:20:03.133 "base_bdevs_list": [ 00:20:03.133 { 00:20:03.133 "name": "BaseBdev1", 00:20:03.133 "uuid": "72ead145-fdd9-5209-b6f3-7f141fdb76f9", 00:20:03.133 "is_configured": true, 00:20:03.133 "data_offset": 2048, 00:20:03.133 "data_size": 63488 00:20:03.133 }, 00:20:03.133 { 00:20:03.133 "name": "BaseBdev2", 00:20:03.133 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:03.133 "is_configured": true, 00:20:03.133 "data_offset": 2048, 00:20:03.133 "data_size": 63488 00:20:03.133 }, 00:20:03.133 { 00:20:03.133 "name": "BaseBdev3", 00:20:03.133 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:03.133 "is_configured": true, 00:20:03.133 "data_offset": 2048, 00:20:03.133 "data_size": 63488 00:20:03.133 } 00:20:03.133 ] 00:20:03.133 }' 00:20:03.133 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.133 14:19:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:03.390 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.390 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.390 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:03.390 [2024-11-27 14:19:33.874000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.390 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:03.649 14:19:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.649 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:03.908 [2024-11-27 14:19:34.273893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:03.908 /dev/nbd0 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
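The waitfornbd helper traced above polls /proc/partitions with grep -q -w up to 20 times before breaking out. The same bounded-retry pattern, reduced to a generic sketch (the function name and calling convention are ours, not SPDK's):

```shell
# Generic bounded polling loop in the style of waitfornbd / waitfornbd_exit:
# retry a predicate command up to $1 times, then give up.
wait_for() {
    local attempts=$1; shift
    local i
    for (( i = 1; i <= attempts; i++ )); do
        if "$@"; then
            return 0          # condition met, e.g. nbd0 showed up in /proc/partitions
        fi
        sleep 0.1
    done
    return 1                  # condition never met within the attempt budget
}

# Usage mirroring the trace (result depends on whether nbd0 exists on this host):
wait_for 20 grep -q -w nbd0 /proc/partitions || echo "nbd0 never appeared"
```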
00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.908 1+0 records in 00:20:03.908 1+0 records out 00:20:03.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258129 s, 15.9 MB/s 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:03.908 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:20:04.475 496+0 records in 00:20:04.475 496+0 records out 00:20:04.475 65011712 bytes (65 MB, 62 MiB) copied, 0.48062 s, 135 MB/s 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.475 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:04.734 [2024-11-27 14:19:35.107503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.734 [2024-11-27 14:19:35.137268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.734 "name": "raid_bdev1", 00:20:04.734 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:04.734 "strip_size_kb": 64, 00:20:04.734 "state": "online", 00:20:04.734 "raid_level": "raid5f", 00:20:04.734 "superblock": true, 00:20:04.734 "num_base_bdevs": 3, 00:20:04.734 "num_base_bdevs_discovered": 2, 00:20:04.734 "num_base_bdevs_operational": 2, 00:20:04.734 "base_bdevs_list": [ 00:20:04.734 { 00:20:04.734 "name": null, 00:20:04.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.734 "is_configured": false, 00:20:04.734 "data_offset": 0, 00:20:04.734 "data_size": 63488 00:20:04.734 }, 00:20:04.734 { 00:20:04.734 "name": "BaseBdev2", 00:20:04.734 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:04.734 "is_configured": true, 00:20:04.734 "data_offset": 2048, 00:20:04.734 "data_size": 63488 00:20:04.734 }, 00:20:04.734 { 00:20:04.734 "name": "BaseBdev3", 00:20:04.734 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:04.734 "is_configured": true, 00:20:04.734 "data_offset": 2048, 00:20:04.734 "data_size": 63488 00:20:04.734 } 00:20:04.734 ] 00:20:04.734 }' 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.734 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.323 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:05.323 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.323 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.323 [2024-11-27 14:19:35.689441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.323 [2024-11-27 14:19:35.705259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:20:05.323 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.323 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:05.323 [2024-11-27 14:19:35.712785] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.259 "name": "raid_bdev1", 00:20:06.259 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:06.259 "strip_size_kb": 64, 00:20:06.259 "state": "online", 00:20:06.259 "raid_level": "raid5f", 00:20:06.259 "superblock": true, 00:20:06.259 "num_base_bdevs": 3, 00:20:06.259 "num_base_bdevs_discovered": 3, 00:20:06.259 "num_base_bdevs_operational": 3, 00:20:06.259 "process": { 00:20:06.259 "type": "rebuild", 00:20:06.259 "target": "spare", 00:20:06.259 "progress": { 
00:20:06.259 "blocks": 18432, 00:20:06.259 "percent": 14 00:20:06.259 } 00:20:06.259 }, 00:20:06.259 "base_bdevs_list": [ 00:20:06.259 { 00:20:06.259 "name": "spare", 00:20:06.259 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:06.259 "is_configured": true, 00:20:06.259 "data_offset": 2048, 00:20:06.259 "data_size": 63488 00:20:06.259 }, 00:20:06.259 { 00:20:06.259 "name": "BaseBdev2", 00:20:06.259 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:06.259 "is_configured": true, 00:20:06.259 "data_offset": 2048, 00:20:06.259 "data_size": 63488 00:20:06.259 }, 00:20:06.259 { 00:20:06.259 "name": "BaseBdev3", 00:20:06.259 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:06.259 "is_configured": true, 00:20:06.259 "data_offset": 2048, 00:20:06.259 "data_size": 63488 00:20:06.259 } 00:20:06.259 ] 00:20:06.259 }' 00:20:06.259 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.518 [2024-11-27 14:19:36.863116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:06.518 [2024-11-27 14:19:36.928673] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:06.518 [2024-11-27 14:19:36.928783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:06.518 [2024-11-27 14:19:36.928831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:06.518 [2024-11-27 14:19:36.928845] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.518 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.518 14:19:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.518 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.518 "name": "raid_bdev1", 00:20:06.518 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:06.518 "strip_size_kb": 64, 00:20:06.518 "state": "online", 00:20:06.518 "raid_level": "raid5f", 00:20:06.518 "superblock": true, 00:20:06.518 "num_base_bdevs": 3, 00:20:06.518 "num_base_bdevs_discovered": 2, 00:20:06.518 "num_base_bdevs_operational": 2, 00:20:06.518 "base_bdevs_list": [ 00:20:06.518 { 00:20:06.518 "name": null, 00:20:06.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.518 "is_configured": false, 00:20:06.518 "data_offset": 0, 00:20:06.518 "data_size": 63488 00:20:06.518 }, 00:20:06.518 { 00:20:06.518 "name": "BaseBdev2", 00:20:06.518 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:06.518 "is_configured": true, 00:20:06.518 "data_offset": 2048, 00:20:06.518 "data_size": 63488 00:20:06.518 }, 00:20:06.518 { 00:20:06.518 "name": "BaseBdev3", 00:20:06.518 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:06.518 "is_configured": true, 00:20:06.518 "data_offset": 2048, 00:20:06.518 "data_size": 63488 00:20:06.518 } 00:20:06.518 ] 00:20:06.518 }' 00:20:06.518 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.518 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.085 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.085 "name": "raid_bdev1", 00:20:07.085 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:07.085 "strip_size_kb": 64, 00:20:07.085 "state": "online", 00:20:07.085 "raid_level": "raid5f", 00:20:07.085 "superblock": true, 00:20:07.085 "num_base_bdevs": 3, 00:20:07.085 "num_base_bdevs_discovered": 2, 00:20:07.085 "num_base_bdevs_operational": 2, 00:20:07.085 "base_bdevs_list": [ 00:20:07.085 { 00:20:07.085 "name": null, 00:20:07.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.086 "is_configured": false, 00:20:07.086 "data_offset": 0, 00:20:07.086 "data_size": 63488 00:20:07.086 }, 00:20:07.086 { 00:20:07.086 "name": "BaseBdev2", 00:20:07.086 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:07.086 "is_configured": true, 00:20:07.086 "data_offset": 2048, 00:20:07.086 "data_size": 63488 00:20:07.086 }, 00:20:07.086 { 00:20:07.086 "name": "BaseBdev3", 00:20:07.086 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:07.086 "is_configured": true, 00:20:07.086 "data_offset": 2048, 00:20:07.086 "data_size": 63488 00:20:07.086 } 00:20:07.086 ] 00:20:07.086 }' 00:20:07.086 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.086 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:07.086 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.344 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:07.344 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:07.344 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.345 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.345 [2024-11-27 14:19:37.646291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.345 [2024-11-27 14:19:37.661284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:20:07.345 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.345 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:07.345 [2024-11-27 14:19:37.668684] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.280 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:08.281 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.281 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.281 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.281 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.281 "name": "raid_bdev1", 00:20:08.281 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:08.281 "strip_size_kb": 64, 00:20:08.281 "state": "online", 00:20:08.281 "raid_level": "raid5f", 00:20:08.281 "superblock": true, 00:20:08.281 "num_base_bdevs": 3, 00:20:08.281 "num_base_bdevs_discovered": 3, 00:20:08.281 "num_base_bdevs_operational": 3, 00:20:08.281 "process": { 00:20:08.281 "type": "rebuild", 00:20:08.281 "target": "spare", 00:20:08.281 "progress": { 00:20:08.281 "blocks": 18432, 00:20:08.281 "percent": 14 00:20:08.281 } 00:20:08.281 }, 00:20:08.281 "base_bdevs_list": [ 00:20:08.281 { 00:20:08.281 "name": "spare", 00:20:08.281 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:08.281 "is_configured": true, 00:20:08.281 "data_offset": 2048, 00:20:08.281 "data_size": 63488 00:20:08.281 }, 00:20:08.281 { 00:20:08.281 "name": "BaseBdev2", 00:20:08.281 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:08.281 "is_configured": true, 00:20:08.281 "data_offset": 2048, 00:20:08.281 "data_size": 63488 00:20:08.281 }, 00:20:08.281 { 00:20:08.281 "name": "BaseBdev3", 00:20:08.281 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:08.281 "is_configured": true, 00:20:08.281 "data_offset": 2048, 00:20:08.281 "data_size": 63488 00:20:08.281 } 00:20:08.281 ] 00:20:08.281 }' 00:20:08.281 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.281 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.281 
14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:08.541 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=620 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.541 "name": "raid_bdev1", 00:20:08.541 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:08.541 "strip_size_kb": 64, 00:20:08.541 "state": "online", 00:20:08.541 "raid_level": "raid5f", 00:20:08.541 "superblock": true, 00:20:08.541 "num_base_bdevs": 3, 00:20:08.541 "num_base_bdevs_discovered": 3, 00:20:08.541 "num_base_bdevs_operational": 3, 00:20:08.541 "process": { 00:20:08.541 "type": "rebuild", 00:20:08.541 "target": "spare", 00:20:08.541 "progress": { 00:20:08.541 "blocks": 22528, 00:20:08.541 "percent": 17 00:20:08.541 } 00:20:08.541 }, 00:20:08.541 "base_bdevs_list": [ 00:20:08.541 { 00:20:08.541 "name": "spare", 00:20:08.541 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:08.541 "is_configured": true, 00:20:08.541 "data_offset": 2048, 00:20:08.541 "data_size": 63488 00:20:08.541 }, 00:20:08.541 { 00:20:08.541 "name": "BaseBdev2", 00:20:08.541 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:08.541 "is_configured": true, 00:20:08.541 "data_offset": 2048, 00:20:08.541 "data_size": 63488 00:20:08.541 }, 00:20:08.541 { 00:20:08.541 "name": "BaseBdev3", 00:20:08.541 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:08.541 "is_configured": true, 00:20:08.541 "data_offset": 2048, 00:20:08.541 "data_size": 63488 00:20:08.541 } 00:20:08.541 ] 00:20:08.541 }' 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.541 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.541 
14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.915 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.915 "name": "raid_bdev1", 00:20:09.915 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:09.915 "strip_size_kb": 64, 00:20:09.915 "state": "online", 00:20:09.915 "raid_level": "raid5f", 00:20:09.915 "superblock": true, 00:20:09.915 "num_base_bdevs": 3, 00:20:09.915 "num_base_bdevs_discovered": 3, 00:20:09.915 "num_base_bdevs_operational": 3, 00:20:09.915 "process": { 00:20:09.915 "type": "rebuild", 00:20:09.915 "target": "spare", 00:20:09.915 "progress": { 00:20:09.915 "blocks": 47104, 00:20:09.915 "percent": 37 00:20:09.915 } 00:20:09.915 }, 00:20:09.915 
"base_bdevs_list": [ 00:20:09.915 { 00:20:09.915 "name": "spare", 00:20:09.915 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:09.915 "is_configured": true, 00:20:09.915 "data_offset": 2048, 00:20:09.915 "data_size": 63488 00:20:09.915 }, 00:20:09.915 { 00:20:09.915 "name": "BaseBdev2", 00:20:09.915 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:09.915 "is_configured": true, 00:20:09.915 "data_offset": 2048, 00:20:09.915 "data_size": 63488 00:20:09.915 }, 00:20:09.915 { 00:20:09.915 "name": "BaseBdev3", 00:20:09.915 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:09.915 "is_configured": true, 00:20:09.915 "data_offset": 2048, 00:20:09.915 "data_size": 63488 00:20:09.915 } 00:20:09.915 ] 00:20:09.915 }' 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.915 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.851 14:19:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.851 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.852 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.852 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.852 "name": "raid_bdev1", 00:20:10.852 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:10.852 "strip_size_kb": 64, 00:20:10.852 "state": "online", 00:20:10.852 "raid_level": "raid5f", 00:20:10.852 "superblock": true, 00:20:10.852 "num_base_bdevs": 3, 00:20:10.852 "num_base_bdevs_discovered": 3, 00:20:10.852 "num_base_bdevs_operational": 3, 00:20:10.852 "process": { 00:20:10.852 "type": "rebuild", 00:20:10.852 "target": "spare", 00:20:10.852 "progress": { 00:20:10.852 "blocks": 69632, 00:20:10.852 "percent": 54 00:20:10.852 } 00:20:10.852 }, 00:20:10.852 "base_bdevs_list": [ 00:20:10.852 { 00:20:10.852 "name": "spare", 00:20:10.852 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:10.852 "is_configured": true, 00:20:10.852 "data_offset": 2048, 00:20:10.852 "data_size": 63488 00:20:10.852 }, 00:20:10.852 { 00:20:10.852 "name": "BaseBdev2", 00:20:10.852 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:10.852 "is_configured": true, 00:20:10.852 "data_offset": 2048, 00:20:10.852 "data_size": 63488 00:20:10.852 }, 00:20:10.852 { 00:20:10.852 "name": "BaseBdev3", 00:20:10.852 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:10.852 "is_configured": true, 00:20:10.852 "data_offset": 2048, 00:20:10.852 "data_size": 63488 00:20:10.852 } 00:20:10.852 ] 00:20:10.852 }' 00:20:10.852 14:19:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.852 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.852 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.852 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.852 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.231 "name": "raid_bdev1", 00:20:12.231 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:12.231 
"strip_size_kb": 64, 00:20:12.231 "state": "online", 00:20:12.231 "raid_level": "raid5f", 00:20:12.231 "superblock": true, 00:20:12.231 "num_base_bdevs": 3, 00:20:12.231 "num_base_bdevs_discovered": 3, 00:20:12.231 "num_base_bdevs_operational": 3, 00:20:12.231 "process": { 00:20:12.231 "type": "rebuild", 00:20:12.231 "target": "spare", 00:20:12.231 "progress": { 00:20:12.231 "blocks": 94208, 00:20:12.231 "percent": 74 00:20:12.231 } 00:20:12.231 }, 00:20:12.231 "base_bdevs_list": [ 00:20:12.231 { 00:20:12.231 "name": "spare", 00:20:12.231 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:12.231 "is_configured": true, 00:20:12.231 "data_offset": 2048, 00:20:12.231 "data_size": 63488 00:20:12.231 }, 00:20:12.231 { 00:20:12.231 "name": "BaseBdev2", 00:20:12.231 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:12.231 "is_configured": true, 00:20:12.231 "data_offset": 2048, 00:20:12.231 "data_size": 63488 00:20:12.231 }, 00:20:12.231 { 00:20:12.231 "name": "BaseBdev3", 00:20:12.231 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:12.231 "is_configured": true, 00:20:12.231 "data_offset": 2048, 00:20:12.231 "data_size": 63488 00:20:12.231 } 00:20:12.231 ] 00:20:12.231 }' 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.231 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.168 "name": "raid_bdev1", 00:20:13.168 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:13.168 "strip_size_kb": 64, 00:20:13.168 "state": "online", 00:20:13.168 "raid_level": "raid5f", 00:20:13.168 "superblock": true, 00:20:13.168 "num_base_bdevs": 3, 00:20:13.168 "num_base_bdevs_discovered": 3, 00:20:13.168 "num_base_bdevs_operational": 3, 00:20:13.168 "process": { 00:20:13.168 "type": "rebuild", 00:20:13.168 "target": "spare", 00:20:13.168 "progress": { 00:20:13.168 "blocks": 116736, 00:20:13.168 "percent": 91 00:20:13.168 } 00:20:13.168 }, 00:20:13.168 "base_bdevs_list": [ 00:20:13.168 { 00:20:13.168 "name": "spare", 00:20:13.168 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:13.168 "is_configured": true, 00:20:13.168 "data_offset": 2048, 00:20:13.168 "data_size": 63488 00:20:13.168 }, 00:20:13.168 { 00:20:13.168 "name": "BaseBdev2", 00:20:13.168 "uuid": 
"d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:13.168 "is_configured": true, 00:20:13.168 "data_offset": 2048, 00:20:13.168 "data_size": 63488 00:20:13.168 }, 00:20:13.168 { 00:20:13.168 "name": "BaseBdev3", 00:20:13.168 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:13.168 "is_configured": true, 00:20:13.168 "data_offset": 2048, 00:20:13.168 "data_size": 63488 00:20:13.168 } 00:20:13.168 ] 00:20:13.168 }' 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.168 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.737 [2024-11-27 14:19:43.948892] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:13.737 [2024-11-27 14:19:43.949015] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:13.737 [2024-11-27 14:19:43.949190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.305 "name": "raid_bdev1", 00:20:14.305 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:14.305 "strip_size_kb": 64, 00:20:14.305 "state": "online", 00:20:14.305 "raid_level": "raid5f", 00:20:14.305 "superblock": true, 00:20:14.305 "num_base_bdevs": 3, 00:20:14.305 "num_base_bdevs_discovered": 3, 00:20:14.305 "num_base_bdevs_operational": 3, 00:20:14.305 "base_bdevs_list": [ 00:20:14.305 { 00:20:14.305 "name": "spare", 00:20:14.305 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:14.305 "is_configured": true, 00:20:14.305 "data_offset": 2048, 00:20:14.305 "data_size": 63488 00:20:14.305 }, 00:20:14.305 { 00:20:14.305 "name": "BaseBdev2", 00:20:14.305 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:14.305 "is_configured": true, 00:20:14.305 "data_offset": 2048, 00:20:14.305 "data_size": 63488 00:20:14.305 }, 00:20:14.305 { 00:20:14.305 "name": "BaseBdev3", 00:20:14.305 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:14.305 "is_configured": true, 00:20:14.305 "data_offset": 2048, 00:20:14.305 "data_size": 63488 00:20:14.305 } 00:20:14.305 ] 00:20:14.305 }' 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:14.305 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.563 "name": "raid_bdev1", 00:20:14.563 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:14.563 "strip_size_kb": 64, 00:20:14.563 "state": "online", 00:20:14.563 "raid_level": "raid5f", 00:20:14.563 "superblock": true, 00:20:14.563 "num_base_bdevs": 3, 00:20:14.563 "num_base_bdevs_discovered": 3, 00:20:14.563 "num_base_bdevs_operational": 3, 00:20:14.563 "base_bdevs_list": [ 
00:20:14.563 { 00:20:14.563 "name": "spare", 00:20:14.563 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:14.563 "is_configured": true, 00:20:14.563 "data_offset": 2048, 00:20:14.563 "data_size": 63488 00:20:14.563 }, 00:20:14.563 { 00:20:14.563 "name": "BaseBdev2", 00:20:14.563 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:14.563 "is_configured": true, 00:20:14.563 "data_offset": 2048, 00:20:14.563 "data_size": 63488 00:20:14.563 }, 00:20:14.563 { 00:20:14.563 "name": "BaseBdev3", 00:20:14.563 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:14.563 "is_configured": true, 00:20:14.563 "data_offset": 2048, 00:20:14.563 "data_size": 63488 00:20:14.563 } 00:20:14.563 ] 00:20:14.563 }' 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.563 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.564 14:19:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.564 14:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.564 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.564 "name": "raid_bdev1", 00:20:14.564 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:14.564 "strip_size_kb": 64, 00:20:14.564 "state": "online", 00:20:14.564 "raid_level": "raid5f", 00:20:14.564 "superblock": true, 00:20:14.564 "num_base_bdevs": 3, 00:20:14.564 "num_base_bdevs_discovered": 3, 00:20:14.564 "num_base_bdevs_operational": 3, 00:20:14.564 "base_bdevs_list": [ 00:20:14.564 { 00:20:14.564 "name": "spare", 00:20:14.564 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:14.564 "is_configured": true, 00:20:14.564 "data_offset": 2048, 00:20:14.564 "data_size": 63488 00:20:14.564 }, 00:20:14.564 { 00:20:14.564 "name": "BaseBdev2", 00:20:14.564 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:14.564 "is_configured": true, 00:20:14.564 "data_offset": 2048, 00:20:14.564 "data_size": 63488 00:20:14.564 }, 00:20:14.564 { 00:20:14.564 "name": "BaseBdev3", 00:20:14.564 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:14.564 "is_configured": true, 00:20:14.564 "data_offset": 2048, 00:20:14.564 
"data_size": 63488 00:20:14.564 } 00:20:14.564 ] 00:20:14.564 }' 00:20:14.564 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.564 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 [2024-11-27 14:19:45.508624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.130 [2024-11-27 14:19:45.508666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.130 [2024-11-27 14:19:45.508779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.130 [2024-11-27 14:19:45.508914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.130 [2024-11-27 14:19:45.508941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:15.130 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:15.388 /dev/nbd0 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:15.646 14:19:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:15.646 1+0 records in 00:20:15.646 1+0 records out 00:20:15.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364454 s, 11.2 MB/s 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:15.646 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:15.905 /dev/nbd1 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:15.905 14:19:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:15.905 1+0 records in 00:20:15.905 1+0 records out 00:20:15.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453254 s, 9.0 MB/s 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:15.905 14:19:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:15.905 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:16.163 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:16.163 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:16.163 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:16.164 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:16.164 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:16.164 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.164 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.421 14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.421 
14:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.679 [2024-11-27 14:19:47.132220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.679 
[2024-11-27 14:19:47.132301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.679 [2024-11-27 14:19:47.132333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:16.679 [2024-11-27 14:19:47.132352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.679 [2024-11-27 14:19:47.135303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.679 [2024-11-27 14:19:47.135353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.679 [2024-11-27 14:19:47.135468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:16.679 [2024-11-27 14:19:47.135544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.679 [2024-11-27 14:19:47.135728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.679 [2024-11-27 14:19:47.135910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.679 spare 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.679 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.937 [2024-11-27 14:19:47.236059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:16.937 [2024-11-27 14:19:47.236113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:16.937 [2024-11-27 14:19:47.236523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:20:16.937 [2024-11-27 14:19:47.241445] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:16.937 [2024-11-27 14:19:47.241475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:16.937 [2024-11-27 14:19:47.241747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.937 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.938 "name": "raid_bdev1", 00:20:16.938 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:16.938 "strip_size_kb": 64, 00:20:16.938 "state": "online", 00:20:16.938 "raid_level": "raid5f", 00:20:16.938 "superblock": true, 00:20:16.938 "num_base_bdevs": 3, 00:20:16.938 "num_base_bdevs_discovered": 3, 00:20:16.938 "num_base_bdevs_operational": 3, 00:20:16.938 "base_bdevs_list": [ 00:20:16.938 { 00:20:16.938 "name": "spare", 00:20:16.938 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:16.938 "is_configured": true, 00:20:16.938 "data_offset": 2048, 00:20:16.938 "data_size": 63488 00:20:16.938 }, 00:20:16.938 { 00:20:16.938 "name": "BaseBdev2", 00:20:16.938 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:16.938 "is_configured": true, 00:20:16.938 "data_offset": 2048, 00:20:16.938 "data_size": 63488 00:20:16.938 }, 00:20:16.938 { 00:20:16.938 "name": "BaseBdev3", 00:20:16.938 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:16.938 "is_configured": true, 00:20:16.938 "data_offset": 2048, 00:20:16.938 "data_size": 63488 00:20:16.938 } 00:20:16.938 ] 00:20:16.938 }' 00:20:16.938 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.938 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.547 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.547 "name": "raid_bdev1", 00:20:17.547 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:17.547 "strip_size_kb": 64, 00:20:17.547 "state": "online", 00:20:17.547 "raid_level": "raid5f", 00:20:17.547 "superblock": true, 00:20:17.547 "num_base_bdevs": 3, 00:20:17.547 "num_base_bdevs_discovered": 3, 00:20:17.547 "num_base_bdevs_operational": 3, 00:20:17.547 "base_bdevs_list": [ 00:20:17.547 { 00:20:17.547 "name": "spare", 00:20:17.547 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:17.547 "is_configured": true, 00:20:17.547 "data_offset": 2048, 00:20:17.547 "data_size": 63488 00:20:17.547 }, 00:20:17.547 { 00:20:17.547 "name": "BaseBdev2", 00:20:17.547 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:17.547 "is_configured": true, 00:20:17.547 "data_offset": 2048, 00:20:17.547 "data_size": 63488 00:20:17.548 }, 00:20:17.548 { 00:20:17.548 "name": "BaseBdev3", 00:20:17.548 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:17.548 "is_configured": true, 00:20:17.548 "data_offset": 2048, 00:20:17.548 "data_size": 63488 00:20:17.548 } 00:20:17.548 ] 00:20:17.548 }' 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.548 [2024-11-27 14:19:47.995556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.548 14:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.548 "name": "raid_bdev1", 00:20:17.548 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:17.548 "strip_size_kb": 64, 00:20:17.548 "state": "online", 00:20:17.548 "raid_level": "raid5f", 00:20:17.548 "superblock": true, 00:20:17.548 "num_base_bdevs": 3, 00:20:17.548 "num_base_bdevs_discovered": 2, 00:20:17.548 "num_base_bdevs_operational": 2, 00:20:17.548 "base_bdevs_list": [ 00:20:17.548 { 00:20:17.548 "name": null, 00:20:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.548 "is_configured": false, 00:20:17.548 "data_offset": 0, 00:20:17.548 "data_size": 63488 00:20:17.548 }, 00:20:17.548 { 00:20:17.548 "name": "BaseBdev2", 
00:20:17.548 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:17.548 "is_configured": true, 00:20:17.548 "data_offset": 2048, 00:20:17.548 "data_size": 63488 00:20:17.548 }, 00:20:17.548 { 00:20:17.548 "name": "BaseBdev3", 00:20:17.548 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:17.548 "is_configured": true, 00:20:17.548 "data_offset": 2048, 00:20:17.548 "data_size": 63488 00:20:17.548 } 00:20:17.548 ] 00:20:17.548 }' 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.548 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.118 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.118 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.118 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.118 [2024-11-27 14:19:48.519698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.118 [2024-11-27 14:19:48.519963] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:18.118 [2024-11-27 14:19:48.519992] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:18.118 [2024-11-27 14:19:48.520042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.118 [2024-11-27 14:19:48.534263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:20:18.118 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.118 14:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:18.118 [2024-11-27 14:19:48.541603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.054 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.313 "name": "raid_bdev1", 00:20:19.313 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:19.313 "strip_size_kb": 64, 00:20:19.313 "state": "online", 00:20:19.313 
"raid_level": "raid5f", 00:20:19.313 "superblock": true, 00:20:19.313 "num_base_bdevs": 3, 00:20:19.313 "num_base_bdevs_discovered": 3, 00:20:19.313 "num_base_bdevs_operational": 3, 00:20:19.313 "process": { 00:20:19.313 "type": "rebuild", 00:20:19.313 "target": "spare", 00:20:19.313 "progress": { 00:20:19.313 "blocks": 18432, 00:20:19.313 "percent": 14 00:20:19.313 } 00:20:19.313 }, 00:20:19.313 "base_bdevs_list": [ 00:20:19.313 { 00:20:19.313 "name": "spare", 00:20:19.313 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:19.313 "is_configured": true, 00:20:19.313 "data_offset": 2048, 00:20:19.313 "data_size": 63488 00:20:19.313 }, 00:20:19.313 { 00:20:19.313 "name": "BaseBdev2", 00:20:19.313 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:19.313 "is_configured": true, 00:20:19.313 "data_offset": 2048, 00:20:19.313 "data_size": 63488 00:20:19.313 }, 00:20:19.313 { 00:20:19.313 "name": "BaseBdev3", 00:20:19.313 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:19.313 "is_configured": true, 00:20:19.313 "data_offset": 2048, 00:20:19.313 "data_size": 63488 00:20:19.313 } 00:20:19.313 ] 00:20:19.313 }' 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 [2024-11-27 14:19:49.695718] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.313 [2024-11-27 14:19:49.758064] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:19.313 [2024-11-27 14:19:49.758183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.313 [2024-11-27 14:19:49.758211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.313 [2024-11-27 14:19:49.758226] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.313 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.574 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.574 "name": "raid_bdev1", 00:20:19.574 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:19.574 "strip_size_kb": 64, 00:20:19.574 "state": "online", 00:20:19.574 "raid_level": "raid5f", 00:20:19.574 "superblock": true, 00:20:19.574 "num_base_bdevs": 3, 00:20:19.574 "num_base_bdevs_discovered": 2, 00:20:19.574 "num_base_bdevs_operational": 2, 00:20:19.574 "base_bdevs_list": [ 00:20:19.574 { 00:20:19.574 "name": null, 00:20:19.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.574 "is_configured": false, 00:20:19.574 "data_offset": 0, 00:20:19.574 "data_size": 63488 00:20:19.574 }, 00:20:19.574 { 00:20:19.574 "name": "BaseBdev2", 00:20:19.574 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:19.574 "is_configured": true, 00:20:19.574 "data_offset": 2048, 00:20:19.574 "data_size": 63488 00:20:19.574 }, 00:20:19.574 { 00:20:19.574 "name": "BaseBdev3", 00:20:19.574 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:19.574 "is_configured": true, 00:20:19.574 "data_offset": 2048, 00:20:19.574 "data_size": 63488 00:20:19.574 } 00:20:19.574 ] 00:20:19.574 }' 00:20:19.574 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.574 14:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.832 14:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:19.832 14:19:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.832 14:19:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.832 [2024-11-27 14:19:50.294099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:19.832 [2024-11-27 14:19:50.294192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.832 [2024-11-27 14:19:50.294224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:19.832 [2024-11-27 14:19:50.294245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.832 [2024-11-27 14:19:50.294891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.832 [2024-11-27 14:19:50.294938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:19.832 [2024-11-27 14:19:50.295063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:19.833 [2024-11-27 14:19:50.295091] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.833 [2024-11-27 14:19:50.295106] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:19.833 [2024-11-27 14:19:50.295139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.833 [2024-11-27 14:19:50.309365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:20:19.833 spare 00:20:19.833 14:19:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.833 14:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:19.833 [2024-11-27 14:19:50.316631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.209 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.209 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.209 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.209 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.209 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.210 "name": "raid_bdev1", 00:20:21.210 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:21.210 "strip_size_kb": 64, 00:20:21.210 "state": 
"online", 00:20:21.210 "raid_level": "raid5f", 00:20:21.210 "superblock": true, 00:20:21.210 "num_base_bdevs": 3, 00:20:21.210 "num_base_bdevs_discovered": 3, 00:20:21.210 "num_base_bdevs_operational": 3, 00:20:21.210 "process": { 00:20:21.210 "type": "rebuild", 00:20:21.210 "target": "spare", 00:20:21.210 "progress": { 00:20:21.210 "blocks": 18432, 00:20:21.210 "percent": 14 00:20:21.210 } 00:20:21.210 }, 00:20:21.210 "base_bdevs_list": [ 00:20:21.210 { 00:20:21.210 "name": "spare", 00:20:21.210 "uuid": "7cc6f45c-5810-562d-880d-80f18b864f3f", 00:20:21.210 "is_configured": true, 00:20:21.210 "data_offset": 2048, 00:20:21.210 "data_size": 63488 00:20:21.210 }, 00:20:21.210 { 00:20:21.210 "name": "BaseBdev2", 00:20:21.210 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:21.210 "is_configured": true, 00:20:21.210 "data_offset": 2048, 00:20:21.210 "data_size": 63488 00:20:21.210 }, 00:20:21.210 { 00:20:21.210 "name": "BaseBdev3", 00:20:21.210 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:21.210 "is_configured": true, 00:20:21.210 "data_offset": 2048, 00:20:21.210 "data_size": 63488 00:20:21.210 } 00:20:21.210 ] 00:20:21.210 }' 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.210 [2024-11-27 14:19:51.474969] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.210 [2024-11-27 14:19:51.532125] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.210 [2024-11-27 14:19:51.532238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.210 [2024-11-27 14:19:51.532270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.210 [2024-11-27 14:19:51.532283] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.210 "name": "raid_bdev1", 00:20:21.210 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:21.210 "strip_size_kb": 64, 00:20:21.210 "state": "online", 00:20:21.210 "raid_level": "raid5f", 00:20:21.210 "superblock": true, 00:20:21.210 "num_base_bdevs": 3, 00:20:21.210 "num_base_bdevs_discovered": 2, 00:20:21.210 "num_base_bdevs_operational": 2, 00:20:21.210 "base_bdevs_list": [ 00:20:21.210 { 00:20:21.210 "name": null, 00:20:21.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.210 "is_configured": false, 00:20:21.210 "data_offset": 0, 00:20:21.210 "data_size": 63488 00:20:21.210 }, 00:20:21.210 { 00:20:21.210 "name": "BaseBdev2", 00:20:21.210 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:21.210 "is_configured": true, 00:20:21.210 "data_offset": 2048, 00:20:21.210 "data_size": 63488 00:20:21.210 }, 00:20:21.210 { 00:20:21.210 "name": "BaseBdev3", 00:20:21.210 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:21.210 "is_configured": true, 00:20:21.210 "data_offset": 2048, 00:20:21.210 "data_size": 63488 00:20:21.210 } 00:20:21.210 ] 00:20:21.210 }' 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.210 14:19:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.778 "name": "raid_bdev1", 00:20:21.778 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:21.778 "strip_size_kb": 64, 00:20:21.778 "state": "online", 00:20:21.778 "raid_level": "raid5f", 00:20:21.778 "superblock": true, 00:20:21.778 "num_base_bdevs": 3, 00:20:21.778 "num_base_bdevs_discovered": 2, 00:20:21.778 "num_base_bdevs_operational": 2, 00:20:21.778 "base_bdevs_list": [ 00:20:21.778 { 00:20:21.778 "name": null, 00:20:21.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.778 "is_configured": false, 00:20:21.778 "data_offset": 0, 00:20:21.778 "data_size": 63488 00:20:21.778 }, 00:20:21.778 { 00:20:21.778 "name": "BaseBdev2", 00:20:21.778 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:21.778 "is_configured": true, 00:20:21.778 "data_offset": 2048, 00:20:21.778 "data_size": 63488 00:20:21.778 }, 00:20:21.778 { 00:20:21.778 "name": "BaseBdev3", 00:20:21.778 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:21.778 "is_configured": true, 
00:20:21.778 "data_offset": 2048, 00:20:21.778 "data_size": 63488 00:20:21.778 } 00:20:21.778 ] 00:20:21.778 }' 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.778 [2024-11-27 14:19:52.260514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:21.778 [2024-11-27 14:19:52.260595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.778 [2024-11-27 14:19:52.260634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:21.778 [2024-11-27 14:19:52.260650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.778 [2024-11-27 14:19:52.261287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.778 [2024-11-27 
14:19:52.261330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:21.778 [2024-11-27 14:19:52.261450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:21.778 [2024-11-27 14:19:52.261476] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.778 [2024-11-27 14:19:52.261501] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:21.778 [2024-11-27 14:19:52.261514] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:21.778 BaseBdev1 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.778 14:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.153 14:19:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.153 "name": "raid_bdev1", 00:20:23.153 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:23.153 "strip_size_kb": 64, 00:20:23.153 "state": "online", 00:20:23.153 "raid_level": "raid5f", 00:20:23.153 "superblock": true, 00:20:23.153 "num_base_bdevs": 3, 00:20:23.153 "num_base_bdevs_discovered": 2, 00:20:23.153 "num_base_bdevs_operational": 2, 00:20:23.153 "base_bdevs_list": [ 00:20:23.153 { 00:20:23.153 "name": null, 00:20:23.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.153 "is_configured": false, 00:20:23.153 "data_offset": 0, 00:20:23.153 "data_size": 63488 00:20:23.153 }, 00:20:23.153 { 00:20:23.153 "name": "BaseBdev2", 00:20:23.153 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:23.153 "is_configured": true, 00:20:23.153 "data_offset": 2048, 00:20:23.153 "data_size": 63488 00:20:23.153 }, 00:20:23.153 { 00:20:23.153 "name": "BaseBdev3", 00:20:23.153 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:23.153 "is_configured": true, 00:20:23.153 "data_offset": 2048, 00:20:23.153 "data_size": 63488 00:20:23.153 } 00:20:23.153 ] 00:20:23.153 }' 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.153 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.412 "name": "raid_bdev1", 00:20:23.412 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:23.412 "strip_size_kb": 64, 00:20:23.412 "state": "online", 00:20:23.412 "raid_level": "raid5f", 00:20:23.412 "superblock": true, 00:20:23.412 "num_base_bdevs": 3, 00:20:23.412 "num_base_bdevs_discovered": 2, 00:20:23.412 "num_base_bdevs_operational": 2, 00:20:23.412 "base_bdevs_list": [ 00:20:23.412 { 00:20:23.412 "name": null, 00:20:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.412 "is_configured": false, 00:20:23.412 "data_offset": 0, 00:20:23.412 "data_size": 63488 00:20:23.412 }, 00:20:23.412 { 00:20:23.412 "name": "BaseBdev2", 00:20:23.412 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 
00:20:23.412 "is_configured": true, 00:20:23.412 "data_offset": 2048, 00:20:23.412 "data_size": 63488 00:20:23.412 }, 00:20:23.412 { 00:20:23.412 "name": "BaseBdev3", 00:20:23.412 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:23.412 "is_configured": true, 00:20:23.412 "data_offset": 2048, 00:20:23.412 "data_size": 63488 00:20:23.412 } 00:20:23.412 ] 00:20:23.412 }' 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.412 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.671 14:19:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.671 [2024-11-27 14:19:53.981102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.671 [2024-11-27 14:19:53.981325] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:23.671 [2024-11-27 14:19:53.981351] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:23.671 request: 00:20:23.671 { 00:20:23.671 "base_bdev": "BaseBdev1", 00:20:23.671 "raid_bdev": "raid_bdev1", 00:20:23.671 "method": "bdev_raid_add_base_bdev", 00:20:23.671 "req_id": 1 00:20:23.671 } 00:20:23.671 Got JSON-RPC error response 00:20:23.671 response: 00:20:23.671 { 00:20:23.671 "code": -22, 00:20:23.671 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:23.671 } 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.671 14:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.606 14:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.606 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.606 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.606 "name": "raid_bdev1", 00:20:24.606 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:24.606 "strip_size_kb": 64, 00:20:24.606 "state": "online", 00:20:24.606 "raid_level": "raid5f", 00:20:24.606 "superblock": true, 00:20:24.606 "num_base_bdevs": 3, 00:20:24.606 "num_base_bdevs_discovered": 2, 00:20:24.606 "num_base_bdevs_operational": 2, 00:20:24.606 "base_bdevs_list": [ 00:20:24.606 { 00:20:24.606 "name": null, 00:20:24.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.606 "is_configured": false, 00:20:24.606 "data_offset": 0, 00:20:24.606 "data_size": 63488 00:20:24.606 }, 00:20:24.606 { 00:20:24.606 
"name": "BaseBdev2", 00:20:24.606 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:24.606 "is_configured": true, 00:20:24.606 "data_offset": 2048, 00:20:24.606 "data_size": 63488 00:20:24.606 }, 00:20:24.606 { 00:20:24.606 "name": "BaseBdev3", 00:20:24.606 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:24.606 "is_configured": true, 00:20:24.606 "data_offset": 2048, 00:20:24.606 "data_size": 63488 00:20:24.606 } 00:20:24.606 ] 00:20:24.606 }' 00:20:24.606 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.606 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.172 "name": "raid_bdev1", 00:20:25.172 "uuid": "fc641fb7-a4fe-460e-bc69-a26c7ef638ca", 00:20:25.172 
"strip_size_kb": 64, 00:20:25.172 "state": "online", 00:20:25.172 "raid_level": "raid5f", 00:20:25.172 "superblock": true, 00:20:25.172 "num_base_bdevs": 3, 00:20:25.172 "num_base_bdevs_discovered": 2, 00:20:25.172 "num_base_bdevs_operational": 2, 00:20:25.172 "base_bdevs_list": [ 00:20:25.172 { 00:20:25.172 "name": null, 00:20:25.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.172 "is_configured": false, 00:20:25.172 "data_offset": 0, 00:20:25.172 "data_size": 63488 00:20:25.172 }, 00:20:25.172 { 00:20:25.172 "name": "BaseBdev2", 00:20:25.172 "uuid": "d82395f2-e1a6-5fd2-9d69-4778921d64c9", 00:20:25.172 "is_configured": true, 00:20:25.172 "data_offset": 2048, 00:20:25.172 "data_size": 63488 00:20:25.172 }, 00:20:25.172 { 00:20:25.172 "name": "BaseBdev3", 00:20:25.172 "uuid": "3a0ec199-0dcf-53a8-afb7-900bd474835e", 00:20:25.172 "is_configured": true, 00:20:25.172 "data_offset": 2048, 00:20:25.172 "data_size": 63488 00:20:25.172 } 00:20:25.172 ] 00:20:25.172 }' 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82630 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82630 ']' 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82630 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:25.172 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.172 14:19:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82630 00:20:25.430 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.430 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.430 killing process with pid 82630 00:20:25.430 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82630' 00:20:25.430 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82630 00:20:25.430 Received shutdown signal, test time was about 60.000000 seconds 00:20:25.430 00:20:25.430 Latency(us) 00:20:25.430 [2024-11-27T14:19:55.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.430 [2024-11-27T14:19:55.943Z] =================================================================================================================== 00:20:25.430 [2024-11-27T14:19:55.943Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.430 [2024-11-27 14:19:55.696358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:25.430 14:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82630 00:20:25.430 [2024-11-27 14:19:55.696531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.430 [2024-11-27 14:19:55.696626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.430 [2024-11-27 14:19:55.696648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:25.746 [2024-11-27 14:19:56.057346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.706 14:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:26.706 00:20:26.706 real 0m25.206s 00:20:26.706 user 0m33.734s 
00:20:26.706 sys 0m2.635s 00:20:26.706 14:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.706 14:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.706 ************************************ 00:20:26.706 END TEST raid5f_rebuild_test_sb 00:20:26.706 ************************************ 00:20:26.706 14:19:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:26.706 14:19:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:20:26.706 14:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:26.706 14:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.706 14:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.706 ************************************ 00:20:26.706 START TEST raid5f_state_function_test 00:20:26.706 ************************************ 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83398 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:26.706 Process raid pid: 83398 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83398' 00:20:26.706 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83398 00:20:26.707 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83398 ']' 00:20:26.707 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.707 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.707 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.707 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.707 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.965 [2024-11-27 14:19:57.278606] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:20:26.965 [2024-11-27 14:19:57.278799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.965 [2024-11-27 14:19:57.459026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.223 [2024-11-27 14:19:57.589183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.481 [2024-11-27 14:19:57.795363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.481 [2024-11-27 14:19:57.795424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.739 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.739 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:27.739 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:27.739 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.739 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.997 [2024-11-27 14:19:58.255877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.997 [2024-11-27 14:19:58.255944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.997 [2024-11-27 14:19:58.255974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.998 [2024-11-27 14:19:58.256001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.998 [2024-11-27 14:19:58.256012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:27.998 [2024-11-27 14:19:58.256027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.998 [2024-11-27 14:19:58.256037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:27.998 [2024-11-27 14:19:58.256052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.998 "name": "Existed_Raid", 00:20:27.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.998 "strip_size_kb": 64, 00:20:27.998 "state": "configuring", 00:20:27.998 "raid_level": "raid5f", 00:20:27.998 "superblock": false, 00:20:27.998 "num_base_bdevs": 4, 00:20:27.998 "num_base_bdevs_discovered": 0, 00:20:27.998 "num_base_bdevs_operational": 4, 00:20:27.998 "base_bdevs_list": [ 00:20:27.998 { 00:20:27.998 "name": "BaseBdev1", 00:20:27.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.998 "is_configured": false, 00:20:27.998 "data_offset": 0, 00:20:27.998 "data_size": 0 00:20:27.998 }, 00:20:27.998 { 00:20:27.998 "name": "BaseBdev2", 00:20:27.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.998 "is_configured": false, 00:20:27.998 "data_offset": 0, 00:20:27.998 "data_size": 0 00:20:27.998 }, 00:20:27.998 { 00:20:27.998 "name": "BaseBdev3", 00:20:27.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.998 "is_configured": false, 00:20:27.998 "data_offset": 0, 00:20:27.998 "data_size": 0 00:20:27.998 }, 00:20:27.998 { 00:20:27.998 "name": "BaseBdev4", 00:20:27.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.998 "is_configured": false, 00:20:27.998 "data_offset": 0, 00:20:27.998 "data_size": 0 00:20:27.998 } 00:20:27.998 ] 00:20:27.998 }' 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.998 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.565 [2024-11-27 14:19:58.779972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.565 [2024-11-27 14:19:58.780023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.565 [2024-11-27 14:19:58.787935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:28.565 [2024-11-27 14:19:58.787988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:28.565 [2024-11-27 14:19:58.788003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.565 [2024-11-27 14:19:58.788020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:28.565 [2024-11-27 14:19:58.788031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.565 [2024-11-27 14:19:58.788051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.565 [2024-11-27 14:19:58.788061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:28.565 [2024-11-27 14:19:58.788075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.565 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.566 [2024-11-27 14:19:58.833780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.566 BaseBdev1 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.566 
14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.566 [ 00:20:28.566 { 00:20:28.566 "name": "BaseBdev1", 00:20:28.566 "aliases": [ 00:20:28.566 "9843c4de-609c-4830-81ea-e5842b18b405" 00:20:28.566 ], 00:20:28.566 "product_name": "Malloc disk", 00:20:28.566 "block_size": 512, 00:20:28.566 "num_blocks": 65536, 00:20:28.566 "uuid": "9843c4de-609c-4830-81ea-e5842b18b405", 00:20:28.566 "assigned_rate_limits": { 00:20:28.566 "rw_ios_per_sec": 0, 00:20:28.566 "rw_mbytes_per_sec": 0, 00:20:28.566 "r_mbytes_per_sec": 0, 00:20:28.566 "w_mbytes_per_sec": 0 00:20:28.566 }, 00:20:28.566 "claimed": true, 00:20:28.566 "claim_type": "exclusive_write", 00:20:28.566 "zoned": false, 00:20:28.566 "supported_io_types": { 00:20:28.566 "read": true, 00:20:28.566 "write": true, 00:20:28.566 "unmap": true, 00:20:28.566 "flush": true, 00:20:28.566 "reset": true, 00:20:28.566 "nvme_admin": false, 00:20:28.566 "nvme_io": false, 00:20:28.566 "nvme_io_md": false, 00:20:28.566 "write_zeroes": true, 00:20:28.566 "zcopy": true, 00:20:28.566 "get_zone_info": false, 00:20:28.566 "zone_management": false, 00:20:28.566 "zone_append": false, 00:20:28.566 "compare": false, 00:20:28.566 "compare_and_write": false, 00:20:28.566 "abort": true, 00:20:28.566 "seek_hole": false, 00:20:28.566 "seek_data": false, 00:20:28.566 "copy": true, 00:20:28.566 "nvme_iov_md": false 00:20:28.566 }, 00:20:28.566 "memory_domains": [ 00:20:28.566 { 00:20:28.566 "dma_device_id": "system", 00:20:28.566 "dma_device_type": 1 00:20:28.566 }, 00:20:28.566 { 00:20:28.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.566 "dma_device_type": 2 00:20:28.566 } 00:20:28.566 ], 00:20:28.566 "driver_specific": {} 00:20:28.566 } 
00:20:28.566 ] 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.566 "name": "Existed_Raid", 00:20:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.566 "strip_size_kb": 64, 00:20:28.566 "state": "configuring", 00:20:28.566 "raid_level": "raid5f", 00:20:28.566 "superblock": false, 00:20:28.566 "num_base_bdevs": 4, 00:20:28.566 "num_base_bdevs_discovered": 1, 00:20:28.566 "num_base_bdevs_operational": 4, 00:20:28.566 "base_bdevs_list": [ 00:20:28.566 { 00:20:28.566 "name": "BaseBdev1", 00:20:28.566 "uuid": "9843c4de-609c-4830-81ea-e5842b18b405", 00:20:28.566 "is_configured": true, 00:20:28.566 "data_offset": 0, 00:20:28.566 "data_size": 65536 00:20:28.566 }, 00:20:28.566 { 00:20:28.566 "name": "BaseBdev2", 00:20:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.566 "is_configured": false, 00:20:28.566 "data_offset": 0, 00:20:28.566 "data_size": 0 00:20:28.566 }, 00:20:28.566 { 00:20:28.566 "name": "BaseBdev3", 00:20:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.566 "is_configured": false, 00:20:28.566 "data_offset": 0, 00:20:28.566 "data_size": 0 00:20:28.566 }, 00:20:28.566 { 00:20:28.566 "name": "BaseBdev4", 00:20:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.566 "is_configured": false, 00:20:28.566 "data_offset": 0, 00:20:28.566 "data_size": 0 00:20:28.566 } 00:20:28.566 ] 00:20:28.566 }' 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.566 14:19:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 
[2024-11-27 14:19:59.397965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.132 [2024-11-27 14:19:59.398028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 [2024-11-27 14:19:59.406036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.132 [2024-11-27 14:19:59.408555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.132 [2024-11-27 14:19:59.408613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.132 [2024-11-27 14:19:59.408630] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:29.132 [2024-11-27 14:19:59.408648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:29.132 [2024-11-27 14:19:59.408658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:29.132 [2024-11-27 14:19:59.408671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.132 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.132 "name": "Existed_Raid", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:29.132 "strip_size_kb": 64, 00:20:29.132 "state": "configuring", 00:20:29.132 "raid_level": "raid5f", 00:20:29.132 "superblock": false, 00:20:29.132 "num_base_bdevs": 4, 00:20:29.132 "num_base_bdevs_discovered": 1, 00:20:29.132 "num_base_bdevs_operational": 4, 00:20:29.132 "base_bdevs_list": [ 00:20:29.132 { 00:20:29.132 "name": "BaseBdev1", 00:20:29.132 "uuid": "9843c4de-609c-4830-81ea-e5842b18b405", 00:20:29.132 "is_configured": true, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 65536 00:20:29.132 }, 00:20:29.132 { 00:20:29.132 "name": "BaseBdev2", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.132 "is_configured": false, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 0 00:20:29.132 }, 00:20:29.132 { 00:20:29.132 "name": "BaseBdev3", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.132 "is_configured": false, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 0 00:20:29.132 }, 00:20:29.132 { 00:20:29.132 "name": "BaseBdev4", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.132 "is_configured": false, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 0 00:20:29.132 } 00:20:29.132 ] 00:20:29.133 }' 00:20:29.133 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.133 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.700 [2024-11-27 14:19:59.950092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.700 BaseBdev2 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.700 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.700 [ 00:20:29.700 { 00:20:29.700 "name": "BaseBdev2", 00:20:29.700 "aliases": [ 00:20:29.700 "6f589f36-91a0-49ce-9696-6415ebe1aeaa" 00:20:29.700 ], 00:20:29.700 "product_name": "Malloc disk", 00:20:29.700 "block_size": 512, 00:20:29.700 "num_blocks": 65536, 00:20:29.700 "uuid": "6f589f36-91a0-49ce-9696-6415ebe1aeaa", 00:20:29.700 "assigned_rate_limits": { 00:20:29.700 "rw_ios_per_sec": 0, 00:20:29.700 "rw_mbytes_per_sec": 0, 00:20:29.700 
"r_mbytes_per_sec": 0, 00:20:29.700 "w_mbytes_per_sec": 0 00:20:29.700 }, 00:20:29.700 "claimed": true, 00:20:29.700 "claim_type": "exclusive_write", 00:20:29.700 "zoned": false, 00:20:29.700 "supported_io_types": { 00:20:29.700 "read": true, 00:20:29.700 "write": true, 00:20:29.700 "unmap": true, 00:20:29.700 "flush": true, 00:20:29.700 "reset": true, 00:20:29.700 "nvme_admin": false, 00:20:29.700 "nvme_io": false, 00:20:29.700 "nvme_io_md": false, 00:20:29.700 "write_zeroes": true, 00:20:29.700 "zcopy": true, 00:20:29.700 "get_zone_info": false, 00:20:29.700 "zone_management": false, 00:20:29.700 "zone_append": false, 00:20:29.700 "compare": false, 00:20:29.700 "compare_and_write": false, 00:20:29.700 "abort": true, 00:20:29.700 "seek_hole": false, 00:20:29.701 "seek_data": false, 00:20:29.701 "copy": true, 00:20:29.701 "nvme_iov_md": false 00:20:29.701 }, 00:20:29.701 "memory_domains": [ 00:20:29.701 { 00:20:29.701 "dma_device_id": "system", 00:20:29.701 "dma_device_type": 1 00:20:29.701 }, 00:20:29.701 { 00:20:29.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.701 "dma_device_type": 2 00:20:29.701 } 00:20:29.701 ], 00:20:29.701 "driver_specific": {} 00:20:29.701 } 00:20:29.701 ] 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.701 14:19:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.701 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.701 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.701 "name": "Existed_Raid", 00:20:29.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.701 "strip_size_kb": 64, 00:20:29.701 "state": "configuring", 00:20:29.701 "raid_level": "raid5f", 00:20:29.701 "superblock": false, 00:20:29.701 "num_base_bdevs": 4, 00:20:29.701 "num_base_bdevs_discovered": 2, 00:20:29.701 "num_base_bdevs_operational": 4, 00:20:29.701 "base_bdevs_list": [ 00:20:29.701 { 00:20:29.701 "name": "BaseBdev1", 00:20:29.701 "uuid": 
"9843c4de-609c-4830-81ea-e5842b18b405", 00:20:29.701 "is_configured": true, 00:20:29.701 "data_offset": 0, 00:20:29.701 "data_size": 65536 00:20:29.701 }, 00:20:29.701 { 00:20:29.701 "name": "BaseBdev2", 00:20:29.701 "uuid": "6f589f36-91a0-49ce-9696-6415ebe1aeaa", 00:20:29.701 "is_configured": true, 00:20:29.701 "data_offset": 0, 00:20:29.701 "data_size": 65536 00:20:29.701 }, 00:20:29.701 { 00:20:29.701 "name": "BaseBdev3", 00:20:29.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.701 "is_configured": false, 00:20:29.701 "data_offset": 0, 00:20:29.701 "data_size": 0 00:20:29.701 }, 00:20:29.701 { 00:20:29.701 "name": "BaseBdev4", 00:20:29.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.701 "is_configured": false, 00:20:29.701 "data_offset": 0, 00:20:29.701 "data_size": 0 00:20:29.701 } 00:20:29.701 ] 00:20:29.701 }' 00:20:29.701 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.701 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.268 [2024-11-27 14:20:00.564442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:30.268 BaseBdev3 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.268 [ 00:20:30.268 { 00:20:30.268 "name": "BaseBdev3", 00:20:30.268 "aliases": [ 00:20:30.268 "e6f5fcf8-57dc-4076-aa68-87f9bdebc6ba" 00:20:30.268 ], 00:20:30.268 "product_name": "Malloc disk", 00:20:30.268 "block_size": 512, 00:20:30.268 "num_blocks": 65536, 00:20:30.268 "uuid": "e6f5fcf8-57dc-4076-aa68-87f9bdebc6ba", 00:20:30.268 "assigned_rate_limits": { 00:20:30.268 "rw_ios_per_sec": 0, 00:20:30.268 "rw_mbytes_per_sec": 0, 00:20:30.268 "r_mbytes_per_sec": 0, 00:20:30.268 "w_mbytes_per_sec": 0 00:20:30.268 }, 00:20:30.268 "claimed": true, 00:20:30.268 "claim_type": "exclusive_write", 00:20:30.268 "zoned": false, 00:20:30.268 "supported_io_types": { 00:20:30.268 "read": true, 00:20:30.268 "write": true, 00:20:30.268 "unmap": true, 00:20:30.268 "flush": true, 00:20:30.268 "reset": true, 00:20:30.268 "nvme_admin": false, 
00:20:30.268 "nvme_io": false, 00:20:30.268 "nvme_io_md": false, 00:20:30.268 "write_zeroes": true, 00:20:30.268 "zcopy": true, 00:20:30.268 "get_zone_info": false, 00:20:30.268 "zone_management": false, 00:20:30.268 "zone_append": false, 00:20:30.268 "compare": false, 00:20:30.268 "compare_and_write": false, 00:20:30.268 "abort": true, 00:20:30.268 "seek_hole": false, 00:20:30.268 "seek_data": false, 00:20:30.268 "copy": true, 00:20:30.268 "nvme_iov_md": false 00:20:30.268 }, 00:20:30.268 "memory_domains": [ 00:20:30.268 { 00:20:30.268 "dma_device_id": "system", 00:20:30.268 "dma_device_type": 1 00:20:30.268 }, 00:20:30.268 { 00:20:30.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.268 "dma_device_type": 2 00:20:30.268 } 00:20:30.268 ], 00:20:30.268 "driver_specific": {} 00:20:30.268 } 00:20:30.268 ] 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.268 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.269 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.269 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.269 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.269 "name": "Existed_Raid", 00:20:30.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.269 "strip_size_kb": 64, 00:20:30.269 "state": "configuring", 00:20:30.269 "raid_level": "raid5f", 00:20:30.269 "superblock": false, 00:20:30.269 "num_base_bdevs": 4, 00:20:30.269 "num_base_bdevs_discovered": 3, 00:20:30.269 "num_base_bdevs_operational": 4, 00:20:30.269 "base_bdevs_list": [ 00:20:30.269 { 00:20:30.269 "name": "BaseBdev1", 00:20:30.269 "uuid": "9843c4de-609c-4830-81ea-e5842b18b405", 00:20:30.269 "is_configured": true, 00:20:30.269 "data_offset": 0, 00:20:30.269 "data_size": 65536 00:20:30.269 }, 00:20:30.269 { 00:20:30.269 "name": "BaseBdev2", 00:20:30.269 "uuid": "6f589f36-91a0-49ce-9696-6415ebe1aeaa", 00:20:30.269 "is_configured": true, 00:20:30.269 "data_offset": 0, 00:20:30.269 "data_size": 65536 00:20:30.269 }, 00:20:30.269 { 
00:20:30.269 "name": "BaseBdev3", 00:20:30.269 "uuid": "e6f5fcf8-57dc-4076-aa68-87f9bdebc6ba", 00:20:30.269 "is_configured": true, 00:20:30.269 "data_offset": 0, 00:20:30.269 "data_size": 65536 00:20:30.269 }, 00:20:30.269 { 00:20:30.269 "name": "BaseBdev4", 00:20:30.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.269 "is_configured": false, 00:20:30.269 "data_offset": 0, 00:20:30.269 "data_size": 0 00:20:30.269 } 00:20:30.269 ] 00:20:30.269 }' 00:20:30.269 14:20:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.269 14:20:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.835 [2024-11-27 14:20:01.174293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:30.835 [2024-11-27 14:20:01.174436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:30.835 [2024-11-27 14:20:01.174459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:30.835 [2024-11-27 14:20:01.174937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:30.835 [2024-11-27 14:20:01.185322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:30.835 [2024-11-27 14:20:01.185388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:30.835 [2024-11-27 14:20:01.185916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.835 BaseBdev4 00:20:30.835 14:20:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.835 [ 00:20:30.835 { 00:20:30.835 "name": "BaseBdev4", 00:20:30.835 "aliases": [ 00:20:30.835 "53e7b38b-db7a-4d53-a43f-0fafa5a6433a" 00:20:30.835 ], 00:20:30.835 "product_name": "Malloc disk", 00:20:30.835 "block_size": 512, 00:20:30.835 "num_blocks": 65536, 00:20:30.835 "uuid": "53e7b38b-db7a-4d53-a43f-0fafa5a6433a", 00:20:30.835 "assigned_rate_limits": { 00:20:30.835 "rw_ios_per_sec": 0, 00:20:30.835 
"rw_mbytes_per_sec": 0, 00:20:30.835 "r_mbytes_per_sec": 0, 00:20:30.835 "w_mbytes_per_sec": 0 00:20:30.835 }, 00:20:30.835 "claimed": true, 00:20:30.835 "claim_type": "exclusive_write", 00:20:30.835 "zoned": false, 00:20:30.835 "supported_io_types": { 00:20:30.835 "read": true, 00:20:30.835 "write": true, 00:20:30.835 "unmap": true, 00:20:30.835 "flush": true, 00:20:30.835 "reset": true, 00:20:30.835 "nvme_admin": false, 00:20:30.835 "nvme_io": false, 00:20:30.835 "nvme_io_md": false, 00:20:30.835 "write_zeroes": true, 00:20:30.835 "zcopy": true, 00:20:30.835 "get_zone_info": false, 00:20:30.835 "zone_management": false, 00:20:30.835 "zone_append": false, 00:20:30.835 "compare": false, 00:20:30.835 "compare_and_write": false, 00:20:30.835 "abort": true, 00:20:30.835 "seek_hole": false, 00:20:30.835 "seek_data": false, 00:20:30.835 "copy": true, 00:20:30.835 "nvme_iov_md": false 00:20:30.835 }, 00:20:30.835 "memory_domains": [ 00:20:30.835 { 00:20:30.835 "dma_device_id": "system", 00:20:30.835 "dma_device_type": 1 00:20:30.835 }, 00:20:30.835 { 00:20:30.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.835 "dma_device_type": 2 00:20:30.835 } 00:20:30.835 ], 00:20:30.835 "driver_specific": {} 00:20:30.835 } 00:20:30.835 ] 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.835 14:20:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.835 "name": "Existed_Raid", 00:20:30.835 "uuid": "270484f8-cfad-45ae-8b33-2ee661a4ebbe", 00:20:30.835 "strip_size_kb": 64, 00:20:30.835 "state": "online", 00:20:30.835 "raid_level": "raid5f", 00:20:30.835 "superblock": false, 00:20:30.835 "num_base_bdevs": 4, 00:20:30.835 "num_base_bdevs_discovered": 4, 00:20:30.835 "num_base_bdevs_operational": 4, 00:20:30.835 "base_bdevs_list": [ 00:20:30.835 { 00:20:30.835 "name": 
"BaseBdev1", 00:20:30.835 "uuid": "9843c4de-609c-4830-81ea-e5842b18b405", 00:20:30.835 "is_configured": true, 00:20:30.835 "data_offset": 0, 00:20:30.835 "data_size": 65536 00:20:30.835 }, 00:20:30.835 { 00:20:30.835 "name": "BaseBdev2", 00:20:30.835 "uuid": "6f589f36-91a0-49ce-9696-6415ebe1aeaa", 00:20:30.835 "is_configured": true, 00:20:30.835 "data_offset": 0, 00:20:30.835 "data_size": 65536 00:20:30.835 }, 00:20:30.835 { 00:20:30.835 "name": "BaseBdev3", 00:20:30.835 "uuid": "e6f5fcf8-57dc-4076-aa68-87f9bdebc6ba", 00:20:30.835 "is_configured": true, 00:20:30.835 "data_offset": 0, 00:20:30.835 "data_size": 65536 00:20:30.835 }, 00:20:30.835 { 00:20:30.835 "name": "BaseBdev4", 00:20:30.835 "uuid": "53e7b38b-db7a-4d53-a43f-0fafa5a6433a", 00:20:30.835 "is_configured": true, 00:20:30.835 "data_offset": 0, 00:20:30.835 "data_size": 65536 00:20:30.835 } 00:20:30.835 ] 00:20:30.835 }' 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.835 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.401 [2024-11-27 14:20:01.762724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.401 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.401 "name": "Existed_Raid", 00:20:31.401 "aliases": [ 00:20:31.401 "270484f8-cfad-45ae-8b33-2ee661a4ebbe" 00:20:31.401 ], 00:20:31.401 "product_name": "Raid Volume", 00:20:31.401 "block_size": 512, 00:20:31.401 "num_blocks": 196608, 00:20:31.401 "uuid": "270484f8-cfad-45ae-8b33-2ee661a4ebbe", 00:20:31.401 "assigned_rate_limits": { 00:20:31.401 "rw_ios_per_sec": 0, 00:20:31.401 "rw_mbytes_per_sec": 0, 00:20:31.401 "r_mbytes_per_sec": 0, 00:20:31.401 "w_mbytes_per_sec": 0 00:20:31.401 }, 00:20:31.401 "claimed": false, 00:20:31.401 "zoned": false, 00:20:31.401 "supported_io_types": { 00:20:31.401 "read": true, 00:20:31.401 "write": true, 00:20:31.401 "unmap": false, 00:20:31.401 "flush": false, 00:20:31.401 "reset": true, 00:20:31.401 "nvme_admin": false, 00:20:31.401 "nvme_io": false, 00:20:31.401 "nvme_io_md": false, 00:20:31.401 "write_zeroes": true, 00:20:31.401 "zcopy": false, 00:20:31.401 "get_zone_info": false, 00:20:31.401 "zone_management": false, 00:20:31.401 "zone_append": false, 00:20:31.401 "compare": false, 00:20:31.401 "compare_and_write": false, 00:20:31.401 "abort": false, 00:20:31.401 "seek_hole": false, 00:20:31.401 "seek_data": false, 00:20:31.401 "copy": false, 00:20:31.401 "nvme_iov_md": false 00:20:31.401 }, 00:20:31.401 "driver_specific": { 00:20:31.401 "raid": { 00:20:31.401 "uuid": "270484f8-cfad-45ae-8b33-2ee661a4ebbe", 00:20:31.401 "strip_size_kb": 64, 
00:20:31.401 "state": "online", 00:20:31.401 "raid_level": "raid5f", 00:20:31.401 "superblock": false, 00:20:31.401 "num_base_bdevs": 4, 00:20:31.401 "num_base_bdevs_discovered": 4, 00:20:31.401 "num_base_bdevs_operational": 4, 00:20:31.401 "base_bdevs_list": [ 00:20:31.401 { 00:20:31.401 "name": "BaseBdev1", 00:20:31.401 "uuid": "9843c4de-609c-4830-81ea-e5842b18b405", 00:20:31.401 "is_configured": true, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 65536 00:20:31.401 }, 00:20:31.401 { 00:20:31.401 "name": "BaseBdev2", 00:20:31.401 "uuid": "6f589f36-91a0-49ce-9696-6415ebe1aeaa", 00:20:31.401 "is_configured": true, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 65536 00:20:31.401 }, 00:20:31.401 { 00:20:31.401 "name": "BaseBdev3", 00:20:31.401 "uuid": "e6f5fcf8-57dc-4076-aa68-87f9bdebc6ba", 00:20:31.401 "is_configured": true, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 65536 00:20:31.401 }, 00:20:31.401 { 00:20:31.401 "name": "BaseBdev4", 00:20:31.401 "uuid": "53e7b38b-db7a-4d53-a43f-0fafa5a6433a", 00:20:31.401 "is_configured": true, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 65536 00:20:31.401 } 00:20:31.401 ] 00:20:31.401 } 00:20:31.401 } 00:20:31.401 }' 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:31.402 BaseBdev2 00:20:31.402 BaseBdev3 00:20:31.402 BaseBdev4' 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.402 14:20:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.402 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.660 14:20:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.660 14:20:02 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:20:31.660 [2024-11-27 14:20:02.142568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.919 14:20:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.919 "name": "Existed_Raid", 00:20:31.919 "uuid": "270484f8-cfad-45ae-8b33-2ee661a4ebbe", 00:20:31.919 "strip_size_kb": 64, 00:20:31.919 "state": "online", 00:20:31.919 "raid_level": "raid5f", 00:20:31.919 "superblock": false, 00:20:31.919 "num_base_bdevs": 4, 00:20:31.919 "num_base_bdevs_discovered": 3, 00:20:31.919 "num_base_bdevs_operational": 3, 00:20:31.919 "base_bdevs_list": [ 00:20:31.919 { 00:20:31.919 "name": null, 00:20:31.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.919 "is_configured": false, 00:20:31.919 "data_offset": 0, 00:20:31.919 "data_size": 65536 00:20:31.919 }, 00:20:31.919 { 00:20:31.919 "name": "BaseBdev2", 00:20:31.919 "uuid": "6f589f36-91a0-49ce-9696-6415ebe1aeaa", 00:20:31.919 "is_configured": true, 00:20:31.919 "data_offset": 0, 00:20:31.919 "data_size": 65536 00:20:31.919 }, 00:20:31.919 { 00:20:31.919 "name": "BaseBdev3", 00:20:31.919 "uuid": "e6f5fcf8-57dc-4076-aa68-87f9bdebc6ba", 00:20:31.919 "is_configured": true, 00:20:31.919 "data_offset": 0, 00:20:31.919 "data_size": 65536 00:20:31.919 }, 00:20:31.919 { 00:20:31.919 "name": "BaseBdev4", 00:20:31.919 "uuid": "53e7b38b-db7a-4d53-a43f-0fafa5a6433a", 00:20:31.919 "is_configured": true, 00:20:31.919 "data_offset": 0, 00:20:31.919 "data_size": 65536 00:20:31.919 } 00:20:31.919 ] 00:20:31.919 }' 00:20:31.919 
14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.919 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.486 [2024-11-27 14:20:02.862021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:32.486 [2024-11-27 14:20:02.862176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.486 [2024-11-27 14:20:02.954500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.486 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.487 14:20:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:32.487 14:20:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.745 [2024-11-27 14:20:03.018532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.745 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.745 [2024-11-27 14:20:03.168451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:32.745 [2024-11-27 14:20:03.168520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.004 14:20:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.004 BaseBdev2 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:33.004 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.005 [ 00:20:33.005 { 00:20:33.005 "name": "BaseBdev2", 00:20:33.005 "aliases": [ 00:20:33.005 "891966fc-0d76-4b82-a9be-f620021273bf" 00:20:33.005 ], 00:20:33.005 "product_name": "Malloc disk", 00:20:33.005 "block_size": 512, 00:20:33.005 "num_blocks": 65536, 00:20:33.005 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:33.005 "assigned_rate_limits": { 00:20:33.005 "rw_ios_per_sec": 0, 00:20:33.005 "rw_mbytes_per_sec": 0, 00:20:33.005 "r_mbytes_per_sec": 0, 00:20:33.005 "w_mbytes_per_sec": 0 00:20:33.005 }, 00:20:33.005 "claimed": false, 00:20:33.005 "zoned": false, 00:20:33.005 "supported_io_types": { 00:20:33.005 "read": true, 00:20:33.005 "write": true, 00:20:33.005 "unmap": true, 00:20:33.005 "flush": true, 00:20:33.005 "reset": true, 00:20:33.005 "nvme_admin": false, 00:20:33.005 "nvme_io": false, 00:20:33.005 "nvme_io_md": false, 00:20:33.005 "write_zeroes": true, 00:20:33.005 "zcopy": true, 00:20:33.005 "get_zone_info": false, 00:20:33.005 "zone_management": false, 00:20:33.005 "zone_append": false, 00:20:33.005 "compare": false, 00:20:33.005 "compare_and_write": false, 00:20:33.005 "abort": true, 00:20:33.005 "seek_hole": false, 00:20:33.005 "seek_data": false, 00:20:33.005 "copy": true, 00:20:33.005 "nvme_iov_md": false 00:20:33.005 }, 00:20:33.005 "memory_domains": [ 00:20:33.005 { 00:20:33.005 "dma_device_id": "system", 00:20:33.005 "dma_device_type": 1 00:20:33.005 }, 
00:20:33.005 { 00:20:33.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.005 "dma_device_type": 2 00:20:33.005 } 00:20:33.005 ], 00:20:33.005 "driver_specific": {} 00:20:33.005 } 00:20:33.005 ] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.005 BaseBdev3 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.005 [ 00:20:33.005 { 00:20:33.005 "name": "BaseBdev3", 00:20:33.005 "aliases": [ 00:20:33.005 "7af2a76c-3f95-4a73-80a4-13d9a0d454e6" 00:20:33.005 ], 00:20:33.005 "product_name": "Malloc disk", 00:20:33.005 "block_size": 512, 00:20:33.005 "num_blocks": 65536, 00:20:33.005 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:33.005 "assigned_rate_limits": { 00:20:33.005 "rw_ios_per_sec": 0, 00:20:33.005 "rw_mbytes_per_sec": 0, 00:20:33.005 "r_mbytes_per_sec": 0, 00:20:33.005 "w_mbytes_per_sec": 0 00:20:33.005 }, 00:20:33.005 "claimed": false, 00:20:33.005 "zoned": false, 00:20:33.005 "supported_io_types": { 00:20:33.005 "read": true, 00:20:33.005 "write": true, 00:20:33.005 "unmap": true, 00:20:33.005 "flush": true, 00:20:33.005 "reset": true, 00:20:33.005 "nvme_admin": false, 00:20:33.005 "nvme_io": false, 00:20:33.005 "nvme_io_md": false, 00:20:33.005 "write_zeroes": true, 00:20:33.005 "zcopy": true, 00:20:33.005 "get_zone_info": false, 00:20:33.005 "zone_management": false, 00:20:33.005 "zone_append": false, 00:20:33.005 "compare": false, 00:20:33.005 "compare_and_write": false, 00:20:33.005 "abort": true, 00:20:33.005 "seek_hole": false, 00:20:33.005 "seek_data": false, 00:20:33.005 "copy": true, 00:20:33.005 "nvme_iov_md": false 00:20:33.005 }, 00:20:33.005 "memory_domains": [ 00:20:33.005 { 00:20:33.005 "dma_device_id": "system", 00:20:33.005 
"dma_device_type": 1 00:20:33.005 }, 00:20:33.005 { 00:20:33.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.005 "dma_device_type": 2 00:20:33.005 } 00:20:33.005 ], 00:20:33.005 "driver_specific": {} 00:20:33.005 } 00:20:33.005 ] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.005 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.264 BaseBdev4 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:33.264 14:20:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.264 [ 00:20:33.264 { 00:20:33.264 "name": "BaseBdev4", 00:20:33.264 "aliases": [ 00:20:33.264 "37887225-8211-40d6-b330-12ee0afc9619" 00:20:33.264 ], 00:20:33.264 "product_name": "Malloc disk", 00:20:33.264 "block_size": 512, 00:20:33.264 "num_blocks": 65536, 00:20:33.264 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:33.264 "assigned_rate_limits": { 00:20:33.264 "rw_ios_per_sec": 0, 00:20:33.264 "rw_mbytes_per_sec": 0, 00:20:33.264 "r_mbytes_per_sec": 0, 00:20:33.264 "w_mbytes_per_sec": 0 00:20:33.264 }, 00:20:33.264 "claimed": false, 00:20:33.264 "zoned": false, 00:20:33.264 "supported_io_types": { 00:20:33.264 "read": true, 00:20:33.264 "write": true, 00:20:33.264 "unmap": true, 00:20:33.264 "flush": true, 00:20:33.264 "reset": true, 00:20:33.264 "nvme_admin": false, 00:20:33.264 "nvme_io": false, 00:20:33.264 "nvme_io_md": false, 00:20:33.264 "write_zeroes": true, 00:20:33.264 "zcopy": true, 00:20:33.264 "get_zone_info": false, 00:20:33.264 "zone_management": false, 00:20:33.264 "zone_append": false, 00:20:33.264 "compare": false, 00:20:33.264 "compare_and_write": false, 00:20:33.264 "abort": true, 00:20:33.264 "seek_hole": false, 00:20:33.264 "seek_data": false, 00:20:33.264 "copy": true, 00:20:33.264 "nvme_iov_md": false 00:20:33.264 }, 00:20:33.264 "memory_domains": [ 00:20:33.264 { 00:20:33.264 
"dma_device_id": "system", 00:20:33.264 "dma_device_type": 1 00:20:33.264 }, 00:20:33.264 { 00:20:33.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.264 "dma_device_type": 2 00:20:33.264 } 00:20:33.264 ], 00:20:33.264 "driver_specific": {} 00:20:33.264 } 00:20:33.264 ] 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:33.264 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.265 [2024-11-27 14:20:03.569046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:33.265 [2024-11-27 14:20:03.569117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:33.265 [2024-11-27 14:20:03.569158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:33.265 [2024-11-27 14:20:03.572394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:33.265 [2024-11-27 14:20:03.572484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.265 "name": "Existed_Raid", 00:20:33.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.265 "strip_size_kb": 64, 00:20:33.265 "state": "configuring", 00:20:33.265 "raid_level": "raid5f", 00:20:33.265 "superblock": false, 00:20:33.265 
"num_base_bdevs": 4, 00:20:33.265 "num_base_bdevs_discovered": 3, 00:20:33.265 "num_base_bdevs_operational": 4, 00:20:33.265 "base_bdevs_list": [ 00:20:33.265 { 00:20:33.265 "name": "BaseBdev1", 00:20:33.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.265 "is_configured": false, 00:20:33.265 "data_offset": 0, 00:20:33.265 "data_size": 0 00:20:33.265 }, 00:20:33.265 { 00:20:33.265 "name": "BaseBdev2", 00:20:33.265 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:33.265 "is_configured": true, 00:20:33.265 "data_offset": 0, 00:20:33.265 "data_size": 65536 00:20:33.265 }, 00:20:33.265 { 00:20:33.265 "name": "BaseBdev3", 00:20:33.265 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:33.265 "is_configured": true, 00:20:33.265 "data_offset": 0, 00:20:33.265 "data_size": 65536 00:20:33.265 }, 00:20:33.265 { 00:20:33.265 "name": "BaseBdev4", 00:20:33.265 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:33.265 "is_configured": true, 00:20:33.265 "data_offset": 0, 00:20:33.265 "data_size": 65536 00:20:33.265 } 00:20:33.265 ] 00:20:33.265 }' 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.265 14:20:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.832 [2024-11-27 14:20:04.085143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.832 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.832 "name": "Existed_Raid", 00:20:33.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.832 "strip_size_kb": 64, 00:20:33.832 "state": "configuring", 00:20:33.832 "raid_level": "raid5f", 00:20:33.832 "superblock": false, 00:20:33.832 "num_base_bdevs": 4, 
00:20:33.832 "num_base_bdevs_discovered": 2, 00:20:33.832 "num_base_bdevs_operational": 4, 00:20:33.832 "base_bdevs_list": [ 00:20:33.832 { 00:20:33.832 "name": "BaseBdev1", 00:20:33.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.832 "is_configured": false, 00:20:33.832 "data_offset": 0, 00:20:33.832 "data_size": 0 00:20:33.832 }, 00:20:33.832 { 00:20:33.832 "name": null, 00:20:33.832 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:33.832 "is_configured": false, 00:20:33.832 "data_offset": 0, 00:20:33.832 "data_size": 65536 00:20:33.832 }, 00:20:33.832 { 00:20:33.832 "name": "BaseBdev3", 00:20:33.832 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:33.832 "is_configured": true, 00:20:33.832 "data_offset": 0, 00:20:33.832 "data_size": 65536 00:20:33.832 }, 00:20:33.832 { 00:20:33.832 "name": "BaseBdev4", 00:20:33.832 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:33.832 "is_configured": true, 00:20:33.832 "data_offset": 0, 00:20:33.832 "data_size": 65536 00:20:33.832 } 00:20:33.832 ] 00:20:33.832 }' 00:20:33.833 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.833 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.090 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.090 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.090 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.090 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:34.349 14:20:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.349 [2024-11-27 14:20:04.684011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.349 BaseBdev1 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.349 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.350 14:20:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.350 [ 00:20:34.350 { 00:20:34.350 "name": "BaseBdev1", 00:20:34.350 "aliases": [ 00:20:34.350 "9df7e166-be8d-4169-a8f0-fa12d73cd94a" 00:20:34.350 ], 00:20:34.350 "product_name": "Malloc disk", 00:20:34.350 "block_size": 512, 00:20:34.350 "num_blocks": 65536, 00:20:34.350 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:34.350 "assigned_rate_limits": { 00:20:34.350 "rw_ios_per_sec": 0, 00:20:34.350 "rw_mbytes_per_sec": 0, 00:20:34.350 "r_mbytes_per_sec": 0, 00:20:34.350 "w_mbytes_per_sec": 0 00:20:34.350 }, 00:20:34.350 "claimed": true, 00:20:34.350 "claim_type": "exclusive_write", 00:20:34.350 "zoned": false, 00:20:34.350 "supported_io_types": { 00:20:34.350 "read": true, 00:20:34.350 "write": true, 00:20:34.350 "unmap": true, 00:20:34.350 "flush": true, 00:20:34.350 "reset": true, 00:20:34.350 "nvme_admin": false, 00:20:34.350 "nvme_io": false, 00:20:34.350 "nvme_io_md": false, 00:20:34.350 "write_zeroes": true, 00:20:34.350 "zcopy": true, 00:20:34.350 "get_zone_info": false, 00:20:34.350 "zone_management": false, 00:20:34.350 "zone_append": false, 00:20:34.350 "compare": false, 00:20:34.350 "compare_and_write": false, 00:20:34.350 "abort": true, 00:20:34.350 "seek_hole": false, 00:20:34.350 "seek_data": false, 00:20:34.350 "copy": true, 00:20:34.350 "nvme_iov_md": false 00:20:34.350 }, 00:20:34.350 "memory_domains": [ 00:20:34.350 { 00:20:34.350 "dma_device_id": "system", 00:20:34.350 "dma_device_type": 1 00:20:34.350 }, 00:20:34.350 { 00:20:34.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.350 "dma_device_type": 2 00:20:34.350 } 00:20:34.350 ], 00:20:34.350 "driver_specific": {} 00:20:34.350 } 00:20:34.350 ] 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:34.350 14:20:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.350 "name": "Existed_Raid", 00:20:34.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.350 "strip_size_kb": 64, 00:20:34.350 "state": 
"configuring", 00:20:34.350 "raid_level": "raid5f", 00:20:34.350 "superblock": false, 00:20:34.350 "num_base_bdevs": 4, 00:20:34.350 "num_base_bdevs_discovered": 3, 00:20:34.350 "num_base_bdevs_operational": 4, 00:20:34.350 "base_bdevs_list": [ 00:20:34.350 { 00:20:34.350 "name": "BaseBdev1", 00:20:34.350 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:34.350 "is_configured": true, 00:20:34.350 "data_offset": 0, 00:20:34.350 "data_size": 65536 00:20:34.350 }, 00:20:34.350 { 00:20:34.350 "name": null, 00:20:34.350 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:34.350 "is_configured": false, 00:20:34.350 "data_offset": 0, 00:20:34.350 "data_size": 65536 00:20:34.350 }, 00:20:34.350 { 00:20:34.350 "name": "BaseBdev3", 00:20:34.350 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:34.350 "is_configured": true, 00:20:34.350 "data_offset": 0, 00:20:34.350 "data_size": 65536 00:20:34.350 }, 00:20:34.350 { 00:20:34.350 "name": "BaseBdev4", 00:20:34.350 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:34.350 "is_configured": true, 00:20:34.350 "data_offset": 0, 00:20:34.350 "data_size": 65536 00:20:34.350 } 00:20:34.350 ] 00:20:34.350 }' 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.350 14:20:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.915 14:20:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.915 [2024-11-27 14:20:05.296238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.915 14:20:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.915 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.915 "name": "Existed_Raid", 00:20:34.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.915 "strip_size_kb": 64, 00:20:34.915 "state": "configuring", 00:20:34.915 "raid_level": "raid5f", 00:20:34.915 "superblock": false, 00:20:34.915 "num_base_bdevs": 4, 00:20:34.915 "num_base_bdevs_discovered": 2, 00:20:34.915 "num_base_bdevs_operational": 4, 00:20:34.915 "base_bdevs_list": [ 00:20:34.915 { 00:20:34.915 "name": "BaseBdev1", 00:20:34.915 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:34.915 "is_configured": true, 00:20:34.915 "data_offset": 0, 00:20:34.915 "data_size": 65536 00:20:34.915 }, 00:20:34.915 { 00:20:34.915 "name": null, 00:20:34.915 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:34.915 "is_configured": false, 00:20:34.915 "data_offset": 0, 00:20:34.915 "data_size": 65536 00:20:34.915 }, 00:20:34.916 { 00:20:34.916 "name": null, 00:20:34.916 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:34.916 "is_configured": false, 00:20:34.916 "data_offset": 0, 00:20:34.916 "data_size": 65536 00:20:34.916 }, 00:20:34.916 { 00:20:34.916 "name": "BaseBdev4", 00:20:34.916 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:34.916 "is_configured": true, 00:20:34.916 "data_offset": 0, 00:20:34.916 "data_size": 65536 00:20:34.916 } 00:20:34.916 ] 00:20:34.916 }' 00:20:34.916 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.916 14:20:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.481 [2024-11-27 14:20:05.952392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.481 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.482 
14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.482 14:20:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.740 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.740 "name": "Existed_Raid", 00:20:35.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.740 "strip_size_kb": 64, 00:20:35.740 "state": "configuring", 00:20:35.740 "raid_level": "raid5f", 00:20:35.740 "superblock": false, 00:20:35.740 "num_base_bdevs": 4, 00:20:35.740 "num_base_bdevs_discovered": 3, 00:20:35.740 "num_base_bdevs_operational": 4, 00:20:35.740 "base_bdevs_list": [ 00:20:35.740 { 00:20:35.740 "name": "BaseBdev1", 00:20:35.740 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:35.740 "is_configured": true, 00:20:35.740 "data_offset": 0, 00:20:35.740 "data_size": 65536 00:20:35.740 }, 00:20:35.740 { 00:20:35.740 "name": null, 00:20:35.740 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:35.740 "is_configured": 
false, 00:20:35.740 "data_offset": 0, 00:20:35.740 "data_size": 65536 00:20:35.740 }, 00:20:35.740 { 00:20:35.740 "name": "BaseBdev3", 00:20:35.740 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:35.740 "is_configured": true, 00:20:35.740 "data_offset": 0, 00:20:35.740 "data_size": 65536 00:20:35.740 }, 00:20:35.740 { 00:20:35.740 "name": "BaseBdev4", 00:20:35.740 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:35.740 "is_configured": true, 00:20:35.740 "data_offset": 0, 00:20:35.740 "data_size": 65536 00:20:35.740 } 00:20:35.740 ] 00:20:35.740 }' 00:20:35.740 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.740 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.308 [2024-11-27 14:20:06.568642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.308 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.308 "name": "Existed_Raid", 00:20:36.308 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:36.308 "strip_size_kb": 64, 00:20:36.308 "state": "configuring", 00:20:36.308 "raid_level": "raid5f", 00:20:36.308 "superblock": false, 00:20:36.308 "num_base_bdevs": 4, 00:20:36.308 "num_base_bdevs_discovered": 2, 00:20:36.308 "num_base_bdevs_operational": 4, 00:20:36.308 "base_bdevs_list": [ 00:20:36.308 { 00:20:36.308 "name": null, 00:20:36.308 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:36.308 "is_configured": false, 00:20:36.308 "data_offset": 0, 00:20:36.308 "data_size": 65536 00:20:36.308 }, 00:20:36.308 { 00:20:36.308 "name": null, 00:20:36.308 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:36.308 "is_configured": false, 00:20:36.309 "data_offset": 0, 00:20:36.309 "data_size": 65536 00:20:36.309 }, 00:20:36.309 { 00:20:36.309 "name": "BaseBdev3", 00:20:36.309 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:36.309 "is_configured": true, 00:20:36.309 "data_offset": 0, 00:20:36.309 "data_size": 65536 00:20:36.309 }, 00:20:36.309 { 00:20:36.309 "name": "BaseBdev4", 00:20:36.309 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:36.309 "is_configured": true, 00:20:36.309 "data_offset": 0, 00:20:36.309 "data_size": 65536 00:20:36.309 } 00:20:36.309 ] 00:20:36.309 }' 00:20:36.309 14:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.309 14:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.874 [2024-11-27 14:20:07.265300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.874 "name": "Existed_Raid", 00:20:36.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.874 "strip_size_kb": 64, 00:20:36.874 "state": "configuring", 00:20:36.874 "raid_level": "raid5f", 00:20:36.874 "superblock": false, 00:20:36.874 "num_base_bdevs": 4, 00:20:36.874 "num_base_bdevs_discovered": 3, 00:20:36.874 "num_base_bdevs_operational": 4, 00:20:36.874 "base_bdevs_list": [ 00:20:36.874 { 00:20:36.874 "name": null, 00:20:36.874 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:36.874 "is_configured": false, 00:20:36.874 "data_offset": 0, 00:20:36.874 "data_size": 65536 00:20:36.874 }, 00:20:36.874 { 00:20:36.874 "name": "BaseBdev2", 00:20:36.874 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:36.874 "is_configured": true, 00:20:36.874 "data_offset": 0, 00:20:36.874 "data_size": 65536 00:20:36.874 }, 00:20:36.874 { 00:20:36.874 "name": "BaseBdev3", 00:20:36.874 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:36.874 "is_configured": true, 00:20:36.874 "data_offset": 0, 00:20:36.874 "data_size": 65536 00:20:36.874 }, 00:20:36.874 { 00:20:36.874 "name": "BaseBdev4", 00:20:36.874 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:36.874 "is_configured": true, 00:20:36.874 "data_offset": 0, 00:20:36.874 "data_size": 65536 00:20:36.874 } 00:20:36.874 ] 00:20:36.874 }' 00:20:36.874 14:20:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.874 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9df7e166-be8d-4169-a8f0-fa12d73cd94a 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 [2024-11-27 14:20:07.933581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:37.441 [2024-11-27 
14:20:07.933669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:37.441 [2024-11-27 14:20:07.933684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:37.441 [2024-11-27 14:20:07.934096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:37.441 [2024-11-27 14:20:07.941310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:37.441 [2024-11-27 14:20:07.941349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:37.441 NewBaseBdev 00:20:37.441 [2024-11-27 14:20:07.941746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.699 14:20:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.699 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:37.699 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.699 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.699 [ 00:20:37.699 { 00:20:37.699 "name": "NewBaseBdev", 00:20:37.699 "aliases": [ 00:20:37.699 "9df7e166-be8d-4169-a8f0-fa12d73cd94a" 00:20:37.699 ], 00:20:37.699 "product_name": "Malloc disk", 00:20:37.699 "block_size": 512, 00:20:37.699 "num_blocks": 65536, 00:20:37.699 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:37.699 "assigned_rate_limits": { 00:20:37.699 "rw_ios_per_sec": 0, 00:20:37.699 "rw_mbytes_per_sec": 0, 00:20:37.699 "r_mbytes_per_sec": 0, 00:20:37.699 "w_mbytes_per_sec": 0 00:20:37.699 }, 00:20:37.699 "claimed": true, 00:20:37.699 "claim_type": "exclusive_write", 00:20:37.699 "zoned": false, 00:20:37.699 "supported_io_types": { 00:20:37.699 "read": true, 00:20:37.699 "write": true, 00:20:37.699 "unmap": true, 00:20:37.699 "flush": true, 00:20:37.699 "reset": true, 00:20:37.699 "nvme_admin": false, 00:20:37.699 "nvme_io": false, 00:20:37.699 "nvme_io_md": false, 00:20:37.699 "write_zeroes": true, 00:20:37.700 "zcopy": true, 00:20:37.700 "get_zone_info": false, 00:20:37.700 "zone_management": false, 00:20:37.700 "zone_append": false, 00:20:37.700 "compare": false, 00:20:37.700 "compare_and_write": false, 00:20:37.700 "abort": true, 00:20:37.700 "seek_hole": false, 00:20:37.700 "seek_data": false, 00:20:37.700 "copy": true, 00:20:37.700 "nvme_iov_md": false 00:20:37.700 }, 00:20:37.700 "memory_domains": [ 00:20:37.700 { 00:20:37.700 "dma_device_id": "system", 00:20:37.700 "dma_device_type": 1 00:20:37.700 }, 00:20:37.700 { 00:20:37.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.700 "dma_device_type": 2 00:20:37.700 } 
00:20:37.700 ], 00:20:37.700 "driver_specific": {} 00:20:37.700 } 00:20:37.700 ] 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.700 14:20:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.700 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.700 "name": "Existed_Raid", 00:20:37.700 "uuid": "29ba6695-8580-41fc-83a4-1c7f6fa6b592", 00:20:37.700 "strip_size_kb": 64, 00:20:37.700 "state": "online", 00:20:37.700 "raid_level": "raid5f", 00:20:37.700 "superblock": false, 00:20:37.700 "num_base_bdevs": 4, 00:20:37.700 "num_base_bdevs_discovered": 4, 00:20:37.700 "num_base_bdevs_operational": 4, 00:20:37.700 "base_bdevs_list": [ 00:20:37.700 { 00:20:37.700 "name": "NewBaseBdev", 00:20:37.700 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:37.700 "is_configured": true, 00:20:37.700 "data_offset": 0, 00:20:37.700 "data_size": 65536 00:20:37.700 }, 00:20:37.700 { 00:20:37.700 "name": "BaseBdev2", 00:20:37.700 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:37.700 "is_configured": true, 00:20:37.700 "data_offset": 0, 00:20:37.700 "data_size": 65536 00:20:37.700 }, 00:20:37.700 { 00:20:37.700 "name": "BaseBdev3", 00:20:37.700 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:37.700 "is_configured": true, 00:20:37.700 "data_offset": 0, 00:20:37.700 "data_size": 65536 00:20:37.700 }, 00:20:37.700 { 00:20:37.700 "name": "BaseBdev4", 00:20:37.700 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:37.700 "is_configured": true, 00:20:37.700 "data_offset": 0, 00:20:37.700 "data_size": 65536 00:20:37.700 } 00:20:37.700 ] 00:20:37.700 }' 00:20:37.700 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.700 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.266 [2024-11-27 14:20:08.510134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.266 "name": "Existed_Raid", 00:20:38.266 "aliases": [ 00:20:38.266 "29ba6695-8580-41fc-83a4-1c7f6fa6b592" 00:20:38.266 ], 00:20:38.266 "product_name": "Raid Volume", 00:20:38.266 "block_size": 512, 00:20:38.266 "num_blocks": 196608, 00:20:38.266 "uuid": "29ba6695-8580-41fc-83a4-1c7f6fa6b592", 00:20:38.266 "assigned_rate_limits": { 00:20:38.266 "rw_ios_per_sec": 0, 00:20:38.266 "rw_mbytes_per_sec": 0, 00:20:38.266 "r_mbytes_per_sec": 0, 00:20:38.266 "w_mbytes_per_sec": 0 00:20:38.266 }, 00:20:38.266 "claimed": false, 00:20:38.266 "zoned": false, 00:20:38.266 "supported_io_types": { 00:20:38.266 "read": true, 00:20:38.266 "write": true, 00:20:38.266 "unmap": false, 00:20:38.266 "flush": false, 00:20:38.266 "reset": true, 00:20:38.266 "nvme_admin": false, 00:20:38.266 "nvme_io": false, 00:20:38.266 "nvme_io_md": 
false, 00:20:38.266 "write_zeroes": true, 00:20:38.266 "zcopy": false, 00:20:38.266 "get_zone_info": false, 00:20:38.266 "zone_management": false, 00:20:38.266 "zone_append": false, 00:20:38.266 "compare": false, 00:20:38.266 "compare_and_write": false, 00:20:38.266 "abort": false, 00:20:38.266 "seek_hole": false, 00:20:38.266 "seek_data": false, 00:20:38.266 "copy": false, 00:20:38.266 "nvme_iov_md": false 00:20:38.266 }, 00:20:38.266 "driver_specific": { 00:20:38.266 "raid": { 00:20:38.266 "uuid": "29ba6695-8580-41fc-83a4-1c7f6fa6b592", 00:20:38.266 "strip_size_kb": 64, 00:20:38.266 "state": "online", 00:20:38.266 "raid_level": "raid5f", 00:20:38.266 "superblock": false, 00:20:38.266 "num_base_bdevs": 4, 00:20:38.266 "num_base_bdevs_discovered": 4, 00:20:38.266 "num_base_bdevs_operational": 4, 00:20:38.266 "base_bdevs_list": [ 00:20:38.266 { 00:20:38.266 "name": "NewBaseBdev", 00:20:38.266 "uuid": "9df7e166-be8d-4169-a8f0-fa12d73cd94a", 00:20:38.266 "is_configured": true, 00:20:38.266 "data_offset": 0, 00:20:38.266 "data_size": 65536 00:20:38.266 }, 00:20:38.266 { 00:20:38.266 "name": "BaseBdev2", 00:20:38.266 "uuid": "891966fc-0d76-4b82-a9be-f620021273bf", 00:20:38.266 "is_configured": true, 00:20:38.266 "data_offset": 0, 00:20:38.266 "data_size": 65536 00:20:38.266 }, 00:20:38.266 { 00:20:38.266 "name": "BaseBdev3", 00:20:38.266 "uuid": "7af2a76c-3f95-4a73-80a4-13d9a0d454e6", 00:20:38.266 "is_configured": true, 00:20:38.266 "data_offset": 0, 00:20:38.266 "data_size": 65536 00:20:38.266 }, 00:20:38.266 { 00:20:38.266 "name": "BaseBdev4", 00:20:38.266 "uuid": "37887225-8211-40d6-b330-12ee0afc9619", 00:20:38.266 "is_configured": true, 00:20:38.266 "data_offset": 0, 00:20:38.266 "data_size": 65536 00:20:38.266 } 00:20:38.266 ] 00:20:38.266 } 00:20:38.266 } 00:20:38.266 }' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.266 14:20:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:38.266 BaseBdev2 00:20:38.266 BaseBdev3 00:20:38.266 BaseBdev4' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.266 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.524 [2024-11-27 14:20:08.889782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:38.524 [2024-11-27 14:20:08.889834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:38.524 [2024-11-27 14:20:08.889939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.524 [2024-11-27 14:20:08.890320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.524 [2024-11-27 14:20:08.890345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83398 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83398 ']' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83398 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.524 14:20:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83398 00:20:38.524 killing process with pid 83398 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83398' 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83398 00:20:38.524 [2024-11-27 14:20:08.924983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:38.524 14:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83398 00:20:38.782 [2024-11-27 14:20:09.282243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:40.159 00:20:40.159 real 0m13.218s 00:20:40.159 user 0m21.882s 00:20:40.159 sys 0m1.816s 00:20:40.159 ************************************ 00:20:40.159 END TEST raid5f_state_function_test 00:20:40.159 ************************************ 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.159 14:20:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:20:40.159 14:20:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:40.159 14:20:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.159 14:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.159 ************************************ 00:20:40.159 START TEST 
raid5f_state_function_test_sb 00:20:40.159 ************************************ 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:40.159 
14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:40.159 Process raid pid: 84081 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84081 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84081' 00:20:40.159 14:20:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84081 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84081 ']' 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.159 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.160 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.160 14:20:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.160 [2024-11-27 14:20:10.549660] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:20:40.160 [2024-11-27 14:20:10.550186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.417 [2024-11-27 14:20:10.745092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.417 [2024-11-27 14:20:10.915746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.983 [2024-11-27 14:20:11.194145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.983 [2024-11-27 14:20:11.195559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.241 [2024-11-27 14:20:11.634262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.241 [2024-11-27 14:20:11.634499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.241 [2024-11-27 14:20:11.634537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.241 [2024-11-27 14:20:11.634569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.241 [2024-11-27 14:20:11.634579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:20:41.241 [2024-11-27 14:20:11.634593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:41.241 [2024-11-27 14:20:11.634603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:41.241 [2024-11-27 14:20:11.634617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.241 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.242 "name": "Existed_Raid", 00:20:41.242 "uuid": "1b0cbbca-3df4-4082-a21e-9cae6c75c4d5", 00:20:41.242 "strip_size_kb": 64, 00:20:41.242 "state": "configuring", 00:20:41.242 "raid_level": "raid5f", 00:20:41.242 "superblock": true, 00:20:41.242 "num_base_bdevs": 4, 00:20:41.242 "num_base_bdevs_discovered": 0, 00:20:41.242 "num_base_bdevs_operational": 4, 00:20:41.242 "base_bdevs_list": [ 00:20:41.242 { 00:20:41.242 "name": "BaseBdev1", 00:20:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.242 "is_configured": false, 00:20:41.242 "data_offset": 0, 00:20:41.242 "data_size": 0 00:20:41.242 }, 00:20:41.242 { 00:20:41.242 "name": "BaseBdev2", 00:20:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.242 "is_configured": false, 00:20:41.242 "data_offset": 0, 00:20:41.242 "data_size": 0 00:20:41.242 }, 00:20:41.242 { 00:20:41.242 "name": "BaseBdev3", 00:20:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.242 "is_configured": false, 00:20:41.242 "data_offset": 0, 00:20:41.242 "data_size": 0 00:20:41.242 }, 00:20:41.242 { 00:20:41.242 "name": "BaseBdev4", 00:20:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.242 "is_configured": false, 00:20:41.242 "data_offset": 0, 00:20:41.242 "data_size": 0 00:20:41.242 } 00:20:41.242 ] 00:20:41.242 }' 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.242 14:20:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:41.808 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:41.808 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.809 [2024-11-27 14:20:12.166311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:41.809 [2024-11-27 14:20:12.166372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.809 [2024-11-27 14:20:12.174307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.809 [2024-11-27 14:20:12.174364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.809 [2024-11-27 14:20:12.174380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.809 [2024-11-27 14:20:12.174397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.809 [2024-11-27 14:20:12.174407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:41.809 [2024-11-27 14:20:12.174421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:41.809 [2024-11-27 14:20:12.174431] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:41.809 [2024-11-27 14:20:12.174445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.809 [2024-11-27 14:20:12.223057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.809 BaseBdev1 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.809 [ 00:20:41.809 { 00:20:41.809 "name": "BaseBdev1", 00:20:41.809 "aliases": [ 00:20:41.809 "89edff56-5a67-4382-9e9c-f9bb2d74bf52" 00:20:41.809 ], 00:20:41.809 "product_name": "Malloc disk", 00:20:41.809 "block_size": 512, 00:20:41.809 "num_blocks": 65536, 00:20:41.809 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:41.809 "assigned_rate_limits": { 00:20:41.809 "rw_ios_per_sec": 0, 00:20:41.809 "rw_mbytes_per_sec": 0, 00:20:41.809 "r_mbytes_per_sec": 0, 00:20:41.809 "w_mbytes_per_sec": 0 00:20:41.809 }, 00:20:41.809 "claimed": true, 00:20:41.809 "claim_type": "exclusive_write", 00:20:41.809 "zoned": false, 00:20:41.809 "supported_io_types": { 00:20:41.809 "read": true, 00:20:41.809 "write": true, 00:20:41.809 "unmap": true, 00:20:41.809 "flush": true, 00:20:41.809 "reset": true, 00:20:41.809 "nvme_admin": false, 00:20:41.809 "nvme_io": false, 00:20:41.809 "nvme_io_md": false, 00:20:41.809 "write_zeroes": true, 00:20:41.809 "zcopy": true, 00:20:41.809 "get_zone_info": false, 00:20:41.809 "zone_management": false, 00:20:41.809 "zone_append": false, 00:20:41.809 "compare": false, 00:20:41.809 "compare_and_write": false, 00:20:41.809 "abort": true, 00:20:41.809 "seek_hole": false, 00:20:41.809 "seek_data": false, 00:20:41.809 "copy": true, 00:20:41.809 "nvme_iov_md": false 00:20:41.809 }, 00:20:41.809 "memory_domains": [ 00:20:41.809 { 00:20:41.809 "dma_device_id": "system", 00:20:41.809 "dma_device_type": 1 00:20:41.809 }, 00:20:41.809 { 00:20:41.809 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:41.809 "dma_device_type": 2 00:20:41.809 } 00:20:41.809 ], 00:20:41.809 "driver_specific": {} 00:20:41.809 } 00:20:41.809 ] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.809 "name": "Existed_Raid", 00:20:41.809 "uuid": "c07ada11-37cc-419e-b912-09c4950ba150", 00:20:41.809 "strip_size_kb": 64, 00:20:41.809 "state": "configuring", 00:20:41.809 "raid_level": "raid5f", 00:20:41.809 "superblock": true, 00:20:41.809 "num_base_bdevs": 4, 00:20:41.809 "num_base_bdevs_discovered": 1, 00:20:41.809 "num_base_bdevs_operational": 4, 00:20:41.809 "base_bdevs_list": [ 00:20:41.809 { 00:20:41.809 "name": "BaseBdev1", 00:20:41.809 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:41.809 "is_configured": true, 00:20:41.809 "data_offset": 2048, 00:20:41.809 "data_size": 63488 00:20:41.809 }, 00:20:41.809 { 00:20:41.809 "name": "BaseBdev2", 00:20:41.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.809 "is_configured": false, 00:20:41.809 "data_offset": 0, 00:20:41.809 "data_size": 0 00:20:41.809 }, 00:20:41.809 { 00:20:41.809 "name": "BaseBdev3", 00:20:41.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.809 "is_configured": false, 00:20:41.809 "data_offset": 0, 00:20:41.809 "data_size": 0 00:20:41.809 }, 00:20:41.809 { 00:20:41.809 "name": "BaseBdev4", 00:20:41.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.809 "is_configured": false, 00:20:41.809 "data_offset": 0, 00:20:41.809 "data_size": 0 00:20:41.809 } 00:20:41.809 ] 00:20:41.809 }' 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.809 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.375 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:42.375 14:20:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.375 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.376 [2024-11-27 14:20:12.799309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:42.376 [2024-11-27 14:20:12.799375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.376 [2024-11-27 14:20:12.807387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.376 [2024-11-27 14:20:12.810021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:42.376 [2024-11-27 14:20:12.810077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:42.376 [2024-11-27 14:20:12.810094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:42.376 [2024-11-27 14:20:12.810112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:42.376 [2024-11-27 14:20:12.810134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:42.376 [2024-11-27 14:20:12.810151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.376 14:20:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.376 "name": "Existed_Raid", 00:20:42.376 "uuid": "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:42.376 "strip_size_kb": 64, 00:20:42.376 "state": "configuring", 00:20:42.376 "raid_level": "raid5f", 00:20:42.376 "superblock": true, 00:20:42.376 "num_base_bdevs": 4, 00:20:42.376 "num_base_bdevs_discovered": 1, 00:20:42.376 "num_base_bdevs_operational": 4, 00:20:42.376 "base_bdevs_list": [ 00:20:42.376 { 00:20:42.376 "name": "BaseBdev1", 00:20:42.376 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:42.376 "is_configured": true, 00:20:42.376 "data_offset": 2048, 00:20:42.376 "data_size": 63488 00:20:42.376 }, 00:20:42.376 { 00:20:42.376 "name": "BaseBdev2", 00:20:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.376 "is_configured": false, 00:20:42.376 "data_offset": 0, 00:20:42.376 "data_size": 0 00:20:42.376 }, 00:20:42.376 { 00:20:42.376 "name": "BaseBdev3", 00:20:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.376 "is_configured": false, 00:20:42.376 "data_offset": 0, 00:20:42.376 "data_size": 0 00:20:42.376 }, 00:20:42.376 { 00:20:42.376 "name": "BaseBdev4", 00:20:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.376 "is_configured": false, 00:20:42.376 "data_offset": 0, 00:20:42.376 "data_size": 0 00:20:42.376 } 00:20:42.376 ] 00:20:42.376 }' 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.376 14:20:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.943 [2024-11-27 14:20:13.353965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:42.943 BaseBdev2 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.943 [ 00:20:42.943 { 00:20:42.943 "name": "BaseBdev2", 00:20:42.943 "aliases": [ 00:20:42.943 
"d0a0ad3b-dec0-4036-8207-67e66a2d62aa" 00:20:42.943 ], 00:20:42.943 "product_name": "Malloc disk", 00:20:42.943 "block_size": 512, 00:20:42.943 "num_blocks": 65536, 00:20:42.943 "uuid": "d0a0ad3b-dec0-4036-8207-67e66a2d62aa", 00:20:42.943 "assigned_rate_limits": { 00:20:42.943 "rw_ios_per_sec": 0, 00:20:42.943 "rw_mbytes_per_sec": 0, 00:20:42.943 "r_mbytes_per_sec": 0, 00:20:42.943 "w_mbytes_per_sec": 0 00:20:42.943 }, 00:20:42.943 "claimed": true, 00:20:42.943 "claim_type": "exclusive_write", 00:20:42.943 "zoned": false, 00:20:42.943 "supported_io_types": { 00:20:42.943 "read": true, 00:20:42.943 "write": true, 00:20:42.943 "unmap": true, 00:20:42.943 "flush": true, 00:20:42.943 "reset": true, 00:20:42.943 "nvme_admin": false, 00:20:42.943 "nvme_io": false, 00:20:42.943 "nvme_io_md": false, 00:20:42.943 "write_zeroes": true, 00:20:42.943 "zcopy": true, 00:20:42.943 "get_zone_info": false, 00:20:42.943 "zone_management": false, 00:20:42.943 "zone_append": false, 00:20:42.943 "compare": false, 00:20:42.943 "compare_and_write": false, 00:20:42.943 "abort": true, 00:20:42.943 "seek_hole": false, 00:20:42.943 "seek_data": false, 00:20:42.943 "copy": true, 00:20:42.943 "nvme_iov_md": false 00:20:42.943 }, 00:20:42.943 "memory_domains": [ 00:20:42.943 { 00:20:42.943 "dma_device_id": "system", 00:20:42.943 "dma_device_type": 1 00:20:42.943 }, 00:20:42.943 { 00:20:42.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.943 "dma_device_type": 2 00:20:42.943 } 00:20:42.943 ], 00:20:42.943 "driver_specific": {} 00:20:42.943 } 00:20:42.943 ] 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.943 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.943 "name": "Existed_Raid", 00:20:42.943 "uuid": 
"d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:42.943 "strip_size_kb": 64, 00:20:42.943 "state": "configuring", 00:20:42.944 "raid_level": "raid5f", 00:20:42.944 "superblock": true, 00:20:42.944 "num_base_bdevs": 4, 00:20:42.944 "num_base_bdevs_discovered": 2, 00:20:42.944 "num_base_bdevs_operational": 4, 00:20:42.944 "base_bdevs_list": [ 00:20:42.944 { 00:20:42.944 "name": "BaseBdev1", 00:20:42.944 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:42.944 "is_configured": true, 00:20:42.944 "data_offset": 2048, 00:20:42.944 "data_size": 63488 00:20:42.944 }, 00:20:42.944 { 00:20:42.944 "name": "BaseBdev2", 00:20:42.944 "uuid": "d0a0ad3b-dec0-4036-8207-67e66a2d62aa", 00:20:42.944 "is_configured": true, 00:20:42.944 "data_offset": 2048, 00:20:42.944 "data_size": 63488 00:20:42.944 }, 00:20:42.944 { 00:20:42.944 "name": "BaseBdev3", 00:20:42.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.944 "is_configured": false, 00:20:42.944 "data_offset": 0, 00:20:42.944 "data_size": 0 00:20:42.944 }, 00:20:42.944 { 00:20:42.944 "name": "BaseBdev4", 00:20:42.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.944 "is_configured": false, 00:20:42.944 "data_offset": 0, 00:20:42.944 "data_size": 0 00:20:42.944 } 00:20:42.944 ] 00:20:42.944 }' 00:20:42.944 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.944 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 [2024-11-27 14:20:13.958954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.511 BaseBdev3 
00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 [ 00:20:43.511 { 00:20:43.511 "name": "BaseBdev3", 00:20:43.511 "aliases": [ 00:20:43.511 "1b9c913b-c2f4-49a0-91ee-12133b35e5c1" 00:20:43.511 ], 00:20:43.511 "product_name": "Malloc disk", 00:20:43.511 "block_size": 512, 00:20:43.511 "num_blocks": 65536, 00:20:43.511 "uuid": "1b9c913b-c2f4-49a0-91ee-12133b35e5c1", 00:20:43.511 
"assigned_rate_limits": { 00:20:43.511 "rw_ios_per_sec": 0, 00:20:43.511 "rw_mbytes_per_sec": 0, 00:20:43.511 "r_mbytes_per_sec": 0, 00:20:43.511 "w_mbytes_per_sec": 0 00:20:43.511 }, 00:20:43.511 "claimed": true, 00:20:43.511 "claim_type": "exclusive_write", 00:20:43.511 "zoned": false, 00:20:43.511 "supported_io_types": { 00:20:43.511 "read": true, 00:20:43.511 "write": true, 00:20:43.511 "unmap": true, 00:20:43.511 "flush": true, 00:20:43.511 "reset": true, 00:20:43.511 "nvme_admin": false, 00:20:43.511 "nvme_io": false, 00:20:43.511 "nvme_io_md": false, 00:20:43.511 "write_zeroes": true, 00:20:43.511 "zcopy": true, 00:20:43.511 "get_zone_info": false, 00:20:43.511 "zone_management": false, 00:20:43.511 "zone_append": false, 00:20:43.511 "compare": false, 00:20:43.511 "compare_and_write": false, 00:20:43.511 "abort": true, 00:20:43.511 "seek_hole": false, 00:20:43.511 "seek_data": false, 00:20:43.511 "copy": true, 00:20:43.511 "nvme_iov_md": false 00:20:43.511 }, 00:20:43.511 "memory_domains": [ 00:20:43.511 { 00:20:43.511 "dma_device_id": "system", 00:20:43.511 "dma_device_type": 1 00:20:43.511 }, 00:20:43.511 { 00:20:43.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.511 "dma_device_type": 2 00:20:43.511 } 00:20:43.511 ], 00:20:43.511 "driver_specific": {} 00:20:43.511 } 00:20:43.511 ] 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.511 14:20:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.769 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.769 "name": "Existed_Raid", 00:20:43.769 "uuid": "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:43.769 "strip_size_kb": 64, 00:20:43.769 "state": "configuring", 00:20:43.769 "raid_level": "raid5f", 00:20:43.769 "superblock": true, 00:20:43.769 "num_base_bdevs": 4, 00:20:43.769 "num_base_bdevs_discovered": 3, 
00:20:43.769 "num_base_bdevs_operational": 4, 00:20:43.769 "base_bdevs_list": [ 00:20:43.769 { 00:20:43.769 "name": "BaseBdev1", 00:20:43.769 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:43.769 "is_configured": true, 00:20:43.769 "data_offset": 2048, 00:20:43.769 "data_size": 63488 00:20:43.769 }, 00:20:43.769 { 00:20:43.769 "name": "BaseBdev2", 00:20:43.769 "uuid": "d0a0ad3b-dec0-4036-8207-67e66a2d62aa", 00:20:43.769 "is_configured": true, 00:20:43.769 "data_offset": 2048, 00:20:43.769 "data_size": 63488 00:20:43.769 }, 00:20:43.769 { 00:20:43.769 "name": "BaseBdev3", 00:20:43.769 "uuid": "1b9c913b-c2f4-49a0-91ee-12133b35e5c1", 00:20:43.769 "is_configured": true, 00:20:43.769 "data_offset": 2048, 00:20:43.769 "data_size": 63488 00:20:43.769 }, 00:20:43.769 { 00:20:43.769 "name": "BaseBdev4", 00:20:43.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.769 "is_configured": false, 00:20:43.769 "data_offset": 0, 00:20:43.769 "data_size": 0 00:20:43.769 } 00:20:43.769 ] 00:20:43.769 }' 00:20:43.769 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.769 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.028 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:44.028 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.028 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.287 [2024-11-27 14:20:14.555614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:44.287 [2024-11-27 14:20:14.556033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:44.287 [2024-11-27 14:20:14.556055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:44.287 BaseBdev4 
00:20:44.287 [2024-11-27 14:20:14.556387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.287 [2024-11-27 14:20:14.563660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:44.287 [2024-11-27 14:20:14.563863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:44.287 [2024-11-27 14:20:14.564314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:44.287 14:20:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.287 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.287 [ 00:20:44.287 { 00:20:44.287 "name": "BaseBdev4", 00:20:44.287 "aliases": [ 00:20:44.287 "26f601d9-20cc-4eeb-a9df-eae94d22f941" 00:20:44.287 ], 00:20:44.287 "product_name": "Malloc disk", 00:20:44.287 "block_size": 512, 00:20:44.287 "num_blocks": 65536, 00:20:44.287 "uuid": "26f601d9-20cc-4eeb-a9df-eae94d22f941", 00:20:44.287 "assigned_rate_limits": { 00:20:44.287 "rw_ios_per_sec": 0, 00:20:44.287 "rw_mbytes_per_sec": 0, 00:20:44.287 "r_mbytes_per_sec": 0, 00:20:44.287 "w_mbytes_per_sec": 0 00:20:44.287 }, 00:20:44.287 "claimed": true, 00:20:44.287 "claim_type": "exclusive_write", 00:20:44.287 "zoned": false, 00:20:44.287 "supported_io_types": { 00:20:44.287 "read": true, 00:20:44.287 "write": true, 00:20:44.287 "unmap": true, 00:20:44.287 "flush": true, 00:20:44.287 "reset": true, 00:20:44.287 "nvme_admin": false, 00:20:44.287 "nvme_io": false, 00:20:44.287 "nvme_io_md": false, 00:20:44.287 "write_zeroes": true, 00:20:44.287 "zcopy": true, 00:20:44.287 "get_zone_info": false, 00:20:44.287 "zone_management": false, 00:20:44.287 "zone_append": false, 00:20:44.287 "compare": false, 00:20:44.287 "compare_and_write": false, 00:20:44.287 "abort": true, 00:20:44.287 "seek_hole": false, 00:20:44.287 "seek_data": false, 00:20:44.287 "copy": true, 00:20:44.287 "nvme_iov_md": false 00:20:44.287 }, 00:20:44.287 "memory_domains": [ 00:20:44.287 { 00:20:44.287 "dma_device_id": "system", 00:20:44.287 "dma_device_type": 1 00:20:44.287 }, 00:20:44.287 { 00:20:44.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.287 "dma_device_type": 2 00:20:44.288 } 00:20:44.288 ], 00:20:44.288 "driver_specific": {} 00:20:44.288 } 00:20:44.288 ] 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.288 14:20:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.288 "name": "Existed_Raid", 00:20:44.288 "uuid": "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:44.288 "strip_size_kb": 64, 00:20:44.288 "state": "online", 00:20:44.288 "raid_level": "raid5f", 00:20:44.288 "superblock": true, 00:20:44.288 "num_base_bdevs": 4, 00:20:44.288 "num_base_bdevs_discovered": 4, 00:20:44.288 "num_base_bdevs_operational": 4, 00:20:44.288 "base_bdevs_list": [ 00:20:44.288 { 00:20:44.288 "name": "BaseBdev1", 00:20:44.288 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:44.288 "is_configured": true, 00:20:44.288 "data_offset": 2048, 00:20:44.288 "data_size": 63488 00:20:44.288 }, 00:20:44.288 { 00:20:44.288 "name": "BaseBdev2", 00:20:44.288 "uuid": "d0a0ad3b-dec0-4036-8207-67e66a2d62aa", 00:20:44.288 "is_configured": true, 00:20:44.288 "data_offset": 2048, 00:20:44.288 "data_size": 63488 00:20:44.288 }, 00:20:44.288 { 00:20:44.288 "name": "BaseBdev3", 00:20:44.288 "uuid": "1b9c913b-c2f4-49a0-91ee-12133b35e5c1", 00:20:44.288 "is_configured": true, 00:20:44.288 "data_offset": 2048, 00:20:44.288 "data_size": 63488 00:20:44.288 }, 00:20:44.288 { 00:20:44.288 "name": "BaseBdev4", 00:20:44.288 "uuid": "26f601d9-20cc-4eeb-a9df-eae94d22f941", 00:20:44.288 "is_configured": true, 00:20:44.288 "data_offset": 2048, 00:20:44.288 "data_size": 63488 00:20:44.288 } 00:20:44.288 ] 00:20:44.288 }' 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.288 14:20:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:44.854 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:44.855 [2024-11-27 14:20:15.124774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:44.855 "name": "Existed_Raid", 00:20:44.855 "aliases": [ 00:20:44.855 "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b" 00:20:44.855 ], 00:20:44.855 "product_name": "Raid Volume", 00:20:44.855 "block_size": 512, 00:20:44.855 "num_blocks": 190464, 00:20:44.855 "uuid": "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:44.855 "assigned_rate_limits": { 00:20:44.855 "rw_ios_per_sec": 0, 00:20:44.855 "rw_mbytes_per_sec": 0, 00:20:44.855 "r_mbytes_per_sec": 0, 00:20:44.855 "w_mbytes_per_sec": 0 00:20:44.855 }, 00:20:44.855 "claimed": false, 00:20:44.855 "zoned": false, 00:20:44.855 "supported_io_types": { 00:20:44.855 "read": true, 00:20:44.855 "write": true, 00:20:44.855 "unmap": false, 00:20:44.855 "flush": false, 
00:20:44.855 "reset": true, 00:20:44.855 "nvme_admin": false, 00:20:44.855 "nvme_io": false, 00:20:44.855 "nvme_io_md": false, 00:20:44.855 "write_zeroes": true, 00:20:44.855 "zcopy": false, 00:20:44.855 "get_zone_info": false, 00:20:44.855 "zone_management": false, 00:20:44.855 "zone_append": false, 00:20:44.855 "compare": false, 00:20:44.855 "compare_and_write": false, 00:20:44.855 "abort": false, 00:20:44.855 "seek_hole": false, 00:20:44.855 "seek_data": false, 00:20:44.855 "copy": false, 00:20:44.855 "nvme_iov_md": false 00:20:44.855 }, 00:20:44.855 "driver_specific": { 00:20:44.855 "raid": { 00:20:44.855 "uuid": "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:44.855 "strip_size_kb": 64, 00:20:44.855 "state": "online", 00:20:44.855 "raid_level": "raid5f", 00:20:44.855 "superblock": true, 00:20:44.855 "num_base_bdevs": 4, 00:20:44.855 "num_base_bdevs_discovered": 4, 00:20:44.855 "num_base_bdevs_operational": 4, 00:20:44.855 "base_bdevs_list": [ 00:20:44.855 { 00:20:44.855 "name": "BaseBdev1", 00:20:44.855 "uuid": "89edff56-5a67-4382-9e9c-f9bb2d74bf52", 00:20:44.855 "is_configured": true, 00:20:44.855 "data_offset": 2048, 00:20:44.855 "data_size": 63488 00:20:44.855 }, 00:20:44.855 { 00:20:44.855 "name": "BaseBdev2", 00:20:44.855 "uuid": "d0a0ad3b-dec0-4036-8207-67e66a2d62aa", 00:20:44.855 "is_configured": true, 00:20:44.855 "data_offset": 2048, 00:20:44.855 "data_size": 63488 00:20:44.855 }, 00:20:44.855 { 00:20:44.855 "name": "BaseBdev3", 00:20:44.855 "uuid": "1b9c913b-c2f4-49a0-91ee-12133b35e5c1", 00:20:44.855 "is_configured": true, 00:20:44.855 "data_offset": 2048, 00:20:44.855 "data_size": 63488 00:20:44.855 }, 00:20:44.855 { 00:20:44.855 "name": "BaseBdev4", 00:20:44.855 "uuid": "26f601d9-20cc-4eeb-a9df-eae94d22f941", 00:20:44.855 "is_configured": true, 00:20:44.855 "data_offset": 2048, 00:20:44.855 "data_size": 63488 00:20:44.855 } 00:20:44.855 ] 00:20:44.855 } 00:20:44.855 } 00:20:44.855 }' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:44.855 BaseBdev2 00:20:44.855 BaseBdev3 00:20:44.855 BaseBdev4' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:44.855 14:20:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.855 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.114 [2024-11-27 14:20:15.500615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.114 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.115 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.115 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.115 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.115 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.115 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.374 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.374 "name": "Existed_Raid", 00:20:45.374 "uuid": "d3bbb4b3-4b92-4ab7-afbd-5d0f73be3b3b", 00:20:45.374 "strip_size_kb": 64, 00:20:45.374 "state": "online", 00:20:45.374 "raid_level": "raid5f", 00:20:45.374 "superblock": true, 00:20:45.374 "num_base_bdevs": 4, 00:20:45.374 "num_base_bdevs_discovered": 3, 00:20:45.374 "num_base_bdevs_operational": 3, 00:20:45.374 "base_bdevs_list": [ 00:20:45.374 { 00:20:45.374 "name": null, 00:20:45.374 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:45.374 "is_configured": false, 00:20:45.374 "data_offset": 0, 00:20:45.374 "data_size": 63488 00:20:45.374 }, 00:20:45.374 { 00:20:45.374 "name": "BaseBdev2", 00:20:45.374 "uuid": "d0a0ad3b-dec0-4036-8207-67e66a2d62aa", 00:20:45.374 "is_configured": true, 00:20:45.374 "data_offset": 2048, 00:20:45.374 "data_size": 63488 00:20:45.374 }, 00:20:45.374 { 00:20:45.374 "name": "BaseBdev3", 00:20:45.374 "uuid": "1b9c913b-c2f4-49a0-91ee-12133b35e5c1", 00:20:45.374 "is_configured": true, 00:20:45.374 "data_offset": 2048, 00:20:45.374 "data_size": 63488 00:20:45.374 }, 00:20:45.374 { 00:20:45.374 "name": "BaseBdev4", 00:20:45.374 "uuid": "26f601d9-20cc-4eeb-a9df-eae94d22f941", 00:20:45.374 "is_configured": true, 00:20:45.374 "data_offset": 2048, 00:20:45.374 "data_size": 63488 00:20:45.374 } 00:20:45.374 ] 00:20:45.374 }' 00:20:45.374 14:20:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.374 14:20:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.632 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.891 [2024-11-27 14:20:16.163793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:45.891 [2024-11-27 14:20:16.164039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.891 [2024-11-27 14:20:16.250132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:45.891 
14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.891 [2024-11-27 14:20:16.310194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:45.891 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.150 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.150 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:46.150 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.151 [2024-11-27 14:20:16.459566] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:46.151 [2024-11-27 14:20:16.459638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.151 BaseBdev2 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.151 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.151 [ 00:20:46.151 { 00:20:46.151 "name": "BaseBdev2", 00:20:46.151 "aliases": [ 00:20:46.151 "8309484a-b8c7-4299-8b18-2c480c60ab9a" 00:20:46.151 ], 00:20:46.151 "product_name": "Malloc disk", 00:20:46.151 "block_size": 512, 00:20:46.151 "num_blocks": 65536, 00:20:46.151 "uuid": 
"8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:46.151 "assigned_rate_limits": { 00:20:46.151 "rw_ios_per_sec": 0, 00:20:46.151 "rw_mbytes_per_sec": 0, 00:20:46.151 "r_mbytes_per_sec": 0, 00:20:46.151 "w_mbytes_per_sec": 0 00:20:46.151 }, 00:20:46.151 "claimed": false, 00:20:46.414 "zoned": false, 00:20:46.414 "supported_io_types": { 00:20:46.414 "read": true, 00:20:46.414 "write": true, 00:20:46.414 "unmap": true, 00:20:46.414 "flush": true, 00:20:46.414 "reset": true, 00:20:46.414 "nvme_admin": false, 00:20:46.414 "nvme_io": false, 00:20:46.414 "nvme_io_md": false, 00:20:46.414 "write_zeroes": true, 00:20:46.414 "zcopy": true, 00:20:46.414 "get_zone_info": false, 00:20:46.414 "zone_management": false, 00:20:46.414 "zone_append": false, 00:20:46.414 "compare": false, 00:20:46.414 "compare_and_write": false, 00:20:46.414 "abort": true, 00:20:46.414 "seek_hole": false, 00:20:46.414 "seek_data": false, 00:20:46.414 "copy": true, 00:20:46.414 "nvme_iov_md": false 00:20:46.414 }, 00:20:46.414 "memory_domains": [ 00:20:46.414 { 00:20:46.414 "dma_device_id": "system", 00:20:46.415 "dma_device_type": 1 00:20:46.415 }, 00:20:46.415 { 00:20:46.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.415 "dma_device_type": 2 00:20:46.415 } 00:20:46.415 ], 00:20:46.415 "driver_specific": {} 00:20:46.415 } 00:20:46.415 ] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 BaseBdev3 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 [ 00:20:46.415 { 00:20:46.415 "name": "BaseBdev3", 00:20:46.415 "aliases": [ 00:20:46.415 "bbd90330-f4f0-41f4-aa6c-702b084e639e" 00:20:46.415 ], 00:20:46.415 
"product_name": "Malloc disk", 00:20:46.415 "block_size": 512, 00:20:46.415 "num_blocks": 65536, 00:20:46.415 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:46.415 "assigned_rate_limits": { 00:20:46.415 "rw_ios_per_sec": 0, 00:20:46.415 "rw_mbytes_per_sec": 0, 00:20:46.415 "r_mbytes_per_sec": 0, 00:20:46.415 "w_mbytes_per_sec": 0 00:20:46.415 }, 00:20:46.415 "claimed": false, 00:20:46.415 "zoned": false, 00:20:46.415 "supported_io_types": { 00:20:46.415 "read": true, 00:20:46.415 "write": true, 00:20:46.415 "unmap": true, 00:20:46.415 "flush": true, 00:20:46.415 "reset": true, 00:20:46.415 "nvme_admin": false, 00:20:46.415 "nvme_io": false, 00:20:46.415 "nvme_io_md": false, 00:20:46.415 "write_zeroes": true, 00:20:46.415 "zcopy": true, 00:20:46.415 "get_zone_info": false, 00:20:46.415 "zone_management": false, 00:20:46.415 "zone_append": false, 00:20:46.415 "compare": false, 00:20:46.415 "compare_and_write": false, 00:20:46.415 "abort": true, 00:20:46.415 "seek_hole": false, 00:20:46.415 "seek_data": false, 00:20:46.415 "copy": true, 00:20:46.415 "nvme_iov_md": false 00:20:46.415 }, 00:20:46.415 "memory_domains": [ 00:20:46.415 { 00:20:46.415 "dma_device_id": "system", 00:20:46.415 "dma_device_type": 1 00:20:46.415 }, 00:20:46.415 { 00:20:46.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.415 "dma_device_type": 2 00:20:46.415 } 00:20:46.415 ], 00:20:46.415 "driver_specific": {} 00:20:46.415 } 00:20:46.415 ] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 BaseBdev4 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 [ 00:20:46.415 { 00:20:46.415 "name": "BaseBdev4", 00:20:46.415 
"aliases": [ 00:20:46.415 "07b77d56-f632-41e1-afd4-b1e451b12309" 00:20:46.415 ], 00:20:46.415 "product_name": "Malloc disk", 00:20:46.415 "block_size": 512, 00:20:46.415 "num_blocks": 65536, 00:20:46.415 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:46.415 "assigned_rate_limits": { 00:20:46.415 "rw_ios_per_sec": 0, 00:20:46.415 "rw_mbytes_per_sec": 0, 00:20:46.415 "r_mbytes_per_sec": 0, 00:20:46.415 "w_mbytes_per_sec": 0 00:20:46.415 }, 00:20:46.415 "claimed": false, 00:20:46.415 "zoned": false, 00:20:46.415 "supported_io_types": { 00:20:46.415 "read": true, 00:20:46.415 "write": true, 00:20:46.415 "unmap": true, 00:20:46.415 "flush": true, 00:20:46.415 "reset": true, 00:20:46.415 "nvme_admin": false, 00:20:46.415 "nvme_io": false, 00:20:46.415 "nvme_io_md": false, 00:20:46.415 "write_zeroes": true, 00:20:46.415 "zcopy": true, 00:20:46.415 "get_zone_info": false, 00:20:46.415 "zone_management": false, 00:20:46.415 "zone_append": false, 00:20:46.415 "compare": false, 00:20:46.415 "compare_and_write": false, 00:20:46.415 "abort": true, 00:20:46.415 "seek_hole": false, 00:20:46.415 "seek_data": false, 00:20:46.415 "copy": true, 00:20:46.415 "nvme_iov_md": false 00:20:46.415 }, 00:20:46.415 "memory_domains": [ 00:20:46.415 { 00:20:46.415 "dma_device_id": "system", 00:20:46.415 "dma_device_type": 1 00:20:46.415 }, 00:20:46.415 { 00:20:46.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.415 "dma_device_type": 2 00:20:46.415 } 00:20:46.415 ], 00:20:46.415 "driver_specific": {} 00:20:46.415 } 00:20:46.415 ] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.415 
14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.415 [2024-11-27 14:20:16.825786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.415 [2024-11-27 14:20:16.825992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.415 [2024-11-27 14:20:16.826041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.415 [2024-11-27 14:20:16.828553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.415 [2024-11-27 14:20:16.828626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.415 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.416 "name": "Existed_Raid", 00:20:46.416 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:46.416 "strip_size_kb": 64, 00:20:46.416 "state": "configuring", 00:20:46.416 "raid_level": "raid5f", 00:20:46.416 "superblock": true, 00:20:46.416 "num_base_bdevs": 4, 00:20:46.416 "num_base_bdevs_discovered": 3, 00:20:46.416 "num_base_bdevs_operational": 4, 00:20:46.416 "base_bdevs_list": [ 00:20:46.416 { 00:20:46.416 "name": "BaseBdev1", 00:20:46.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.416 "is_configured": false, 00:20:46.416 "data_offset": 0, 00:20:46.416 "data_size": 0 00:20:46.416 }, 00:20:46.416 { 00:20:46.416 "name": "BaseBdev2", 00:20:46.416 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:46.416 "is_configured": true, 00:20:46.416 "data_offset": 2048, 00:20:46.416 "data_size": 63488 00:20:46.416 }, 00:20:46.416 { 00:20:46.416 "name": "BaseBdev3", 
00:20:46.416 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:46.416 "is_configured": true, 00:20:46.416 "data_offset": 2048, 00:20:46.416 "data_size": 63488 00:20:46.416 }, 00:20:46.416 { 00:20:46.416 "name": "BaseBdev4", 00:20:46.416 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:46.416 "is_configured": true, 00:20:46.416 "data_offset": 2048, 00:20:46.416 "data_size": 63488 00:20:46.416 } 00:20:46.416 ] 00:20:46.416 }' 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.416 14:20:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.996 [2024-11-27 14:20:17.377956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:46.996 
14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.996 "name": "Existed_Raid", 00:20:46.996 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:46.996 "strip_size_kb": 64, 00:20:46.996 "state": "configuring", 00:20:46.996 "raid_level": "raid5f", 00:20:46.996 "superblock": true, 00:20:46.996 "num_base_bdevs": 4, 00:20:46.996 "num_base_bdevs_discovered": 2, 00:20:46.996 "num_base_bdevs_operational": 4, 00:20:46.996 "base_bdevs_list": [ 00:20:46.996 { 00:20:46.996 "name": "BaseBdev1", 00:20:46.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.996 "is_configured": false, 00:20:46.996 "data_offset": 0, 00:20:46.996 "data_size": 0 00:20:46.996 }, 00:20:46.996 { 00:20:46.996 "name": null, 00:20:46.996 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:46.996 "is_configured": false, 00:20:46.996 "data_offset": 0, 00:20:46.996 "data_size": 63488 00:20:46.996 }, 00:20:46.996 { 
00:20:46.996 "name": "BaseBdev3", 00:20:46.996 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:46.996 "is_configured": true, 00:20:46.996 "data_offset": 2048, 00:20:46.996 "data_size": 63488 00:20:46.996 }, 00:20:46.996 { 00:20:46.996 "name": "BaseBdev4", 00:20:46.996 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:46.996 "is_configured": true, 00:20:46.996 "data_offset": 2048, 00:20:46.996 "data_size": 63488 00:20:46.996 } 00:20:46.996 ] 00:20:46.996 }' 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.996 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:47.564 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.565 [2024-11-27 14:20:17.993396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.565 BaseBdev1 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.565 14:20:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.565 [ 00:20:47.565 { 00:20:47.565 "name": "BaseBdev1", 00:20:47.565 "aliases": [ 00:20:47.565 "6506b8a4-ff41-4371-a84f-75aa29e1d9a2" 00:20:47.565 ], 00:20:47.565 "product_name": "Malloc disk", 00:20:47.565 "block_size": 512, 00:20:47.565 "num_blocks": 65536, 00:20:47.565 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:47.565 "assigned_rate_limits": { 00:20:47.565 "rw_ios_per_sec": 0, 00:20:47.565 "rw_mbytes_per_sec": 0, 00:20:47.565 
"r_mbytes_per_sec": 0, 00:20:47.565 "w_mbytes_per_sec": 0 00:20:47.565 }, 00:20:47.565 "claimed": true, 00:20:47.565 "claim_type": "exclusive_write", 00:20:47.565 "zoned": false, 00:20:47.565 "supported_io_types": { 00:20:47.565 "read": true, 00:20:47.565 "write": true, 00:20:47.565 "unmap": true, 00:20:47.565 "flush": true, 00:20:47.565 "reset": true, 00:20:47.565 "nvme_admin": false, 00:20:47.565 "nvme_io": false, 00:20:47.565 "nvme_io_md": false, 00:20:47.565 "write_zeroes": true, 00:20:47.565 "zcopy": true, 00:20:47.565 "get_zone_info": false, 00:20:47.565 "zone_management": false, 00:20:47.565 "zone_append": false, 00:20:47.565 "compare": false, 00:20:47.565 "compare_and_write": false, 00:20:47.565 "abort": true, 00:20:47.565 "seek_hole": false, 00:20:47.565 "seek_data": false, 00:20:47.565 "copy": true, 00:20:47.565 "nvme_iov_md": false 00:20:47.565 }, 00:20:47.565 "memory_domains": [ 00:20:47.565 { 00:20:47.565 "dma_device_id": "system", 00:20:47.565 "dma_device_type": 1 00:20:47.565 }, 00:20:47.565 { 00:20:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.565 "dma_device_type": 2 00:20:47.565 } 00:20:47.565 ], 00:20:47.565 "driver_specific": {} 00:20:47.565 } 00:20:47.565 ] 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.565 14:20:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.565 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.824 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.824 "name": "Existed_Raid", 00:20:47.824 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:47.824 "strip_size_kb": 64, 00:20:47.824 "state": "configuring", 00:20:47.824 "raid_level": "raid5f", 00:20:47.824 "superblock": true, 00:20:47.824 "num_base_bdevs": 4, 00:20:47.824 "num_base_bdevs_discovered": 3, 00:20:47.824 "num_base_bdevs_operational": 4, 00:20:47.824 "base_bdevs_list": [ 00:20:47.824 { 00:20:47.824 "name": "BaseBdev1", 00:20:47.824 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:47.824 "is_configured": true, 00:20:47.824 "data_offset": 2048, 00:20:47.824 "data_size": 63488 00:20:47.824 
}, 00:20:47.824 { 00:20:47.824 "name": null, 00:20:47.824 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:47.824 "is_configured": false, 00:20:47.824 "data_offset": 0, 00:20:47.824 "data_size": 63488 00:20:47.824 }, 00:20:47.824 { 00:20:47.824 "name": "BaseBdev3", 00:20:47.824 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:47.824 "is_configured": true, 00:20:47.824 "data_offset": 2048, 00:20:47.824 "data_size": 63488 00:20:47.824 }, 00:20:47.824 { 00:20:47.824 "name": "BaseBdev4", 00:20:47.824 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:47.824 "is_configured": true, 00:20:47.824 "data_offset": 2048, 00:20:47.824 "data_size": 63488 00:20:47.824 } 00:20:47.824 ] 00:20:47.824 }' 00:20:47.824 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.824 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.082 
[2024-11-27 14:20:18.585661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.082 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.083 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:48.083 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.083 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.083 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.083 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.340 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.340 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.340 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.340 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.340 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:48.340 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.340 "name": "Existed_Raid", 00:20:48.340 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:48.340 "strip_size_kb": 64, 00:20:48.340 "state": "configuring", 00:20:48.340 "raid_level": "raid5f", 00:20:48.340 "superblock": true, 00:20:48.340 "num_base_bdevs": 4, 00:20:48.340 "num_base_bdevs_discovered": 2, 00:20:48.340 "num_base_bdevs_operational": 4, 00:20:48.340 "base_bdevs_list": [ 00:20:48.340 { 00:20:48.340 "name": "BaseBdev1", 00:20:48.340 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:48.340 "is_configured": true, 00:20:48.340 "data_offset": 2048, 00:20:48.340 "data_size": 63488 00:20:48.340 }, 00:20:48.340 { 00:20:48.340 "name": null, 00:20:48.340 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:48.340 "is_configured": false, 00:20:48.340 "data_offset": 0, 00:20:48.340 "data_size": 63488 00:20:48.340 }, 00:20:48.340 { 00:20:48.341 "name": null, 00:20:48.341 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:48.341 "is_configured": false, 00:20:48.341 "data_offset": 0, 00:20:48.341 "data_size": 63488 00:20:48.341 }, 00:20:48.341 { 00:20:48.341 "name": "BaseBdev4", 00:20:48.341 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:48.341 "is_configured": true, 00:20:48.341 "data_offset": 2048, 00:20:48.341 "data_size": 63488 00:20:48.341 } 00:20:48.341 ] 00:20:48.341 }' 00:20:48.341 14:20:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.341 14:20:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:48.906 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.907 [2024-11-27 14:20:19.165810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.907 14:20:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.907 "name": "Existed_Raid", 00:20:48.907 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:48.907 "strip_size_kb": 64, 00:20:48.907 "state": "configuring", 00:20:48.907 "raid_level": "raid5f", 00:20:48.907 "superblock": true, 00:20:48.907 "num_base_bdevs": 4, 00:20:48.907 "num_base_bdevs_discovered": 3, 00:20:48.907 "num_base_bdevs_operational": 4, 00:20:48.907 "base_bdevs_list": [ 00:20:48.907 { 00:20:48.907 "name": "BaseBdev1", 00:20:48.907 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:48.907 "is_configured": true, 00:20:48.907 "data_offset": 2048, 00:20:48.907 "data_size": 63488 00:20:48.907 }, 00:20:48.907 { 00:20:48.907 "name": null, 00:20:48.907 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:48.907 "is_configured": false, 00:20:48.907 "data_offset": 0, 00:20:48.907 "data_size": 63488 00:20:48.907 }, 00:20:48.907 { 00:20:48.907 "name": "BaseBdev3", 00:20:48.907 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:48.907 "is_configured": true, 00:20:48.907 "data_offset": 2048, 00:20:48.907 "data_size": 63488 00:20:48.907 }, 00:20:48.907 { 
00:20:48.907 "name": "BaseBdev4", 00:20:48.907 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:48.907 "is_configured": true, 00:20:48.907 "data_offset": 2048, 00:20:48.907 "data_size": 63488 00:20:48.907 } 00:20:48.907 ] 00:20:48.907 }' 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.907 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.165 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.165 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 [2024-11-27 14:20:19.730001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.424 "name": "Existed_Raid", 00:20:49.424 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:49.424 "strip_size_kb": 64, 00:20:49.424 "state": "configuring", 00:20:49.424 "raid_level": "raid5f", 00:20:49.424 "superblock": true, 00:20:49.424 "num_base_bdevs": 4, 00:20:49.424 "num_base_bdevs_discovered": 2, 00:20:49.424 
"num_base_bdevs_operational": 4, 00:20:49.424 "base_bdevs_list": [ 00:20:49.424 { 00:20:49.424 "name": null, 00:20:49.424 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:49.424 "is_configured": false, 00:20:49.424 "data_offset": 0, 00:20:49.424 "data_size": 63488 00:20:49.424 }, 00:20:49.424 { 00:20:49.424 "name": null, 00:20:49.424 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:49.424 "is_configured": false, 00:20:49.424 "data_offset": 0, 00:20:49.424 "data_size": 63488 00:20:49.424 }, 00:20:49.424 { 00:20:49.424 "name": "BaseBdev3", 00:20:49.424 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:49.424 "is_configured": true, 00:20:49.424 "data_offset": 2048, 00:20:49.424 "data_size": 63488 00:20:49.424 }, 00:20:49.424 { 00:20:49.424 "name": "BaseBdev4", 00:20:49.424 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:49.424 "is_configured": true, 00:20:49.424 "data_offset": 2048, 00:20:49.424 "data_size": 63488 00:20:49.424 } 00:20:49.424 ] 00:20:49.424 }' 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.424 14:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.992 [2024-11-27 14:20:20.400310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.992 "name": "Existed_Raid", 00:20:49.992 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:49.992 "strip_size_kb": 64, 00:20:49.992 "state": "configuring", 00:20:49.992 "raid_level": "raid5f", 00:20:49.992 "superblock": true, 00:20:49.992 "num_base_bdevs": 4, 00:20:49.992 "num_base_bdevs_discovered": 3, 00:20:49.992 "num_base_bdevs_operational": 4, 00:20:49.992 "base_bdevs_list": [ 00:20:49.992 { 00:20:49.992 "name": null, 00:20:49.992 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:49.992 "is_configured": false, 00:20:49.992 "data_offset": 0, 00:20:49.992 "data_size": 63488 00:20:49.992 }, 00:20:49.992 { 00:20:49.992 "name": "BaseBdev2", 00:20:49.992 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:49.992 "is_configured": true, 00:20:49.992 "data_offset": 2048, 00:20:49.992 "data_size": 63488 00:20:49.992 }, 00:20:49.992 { 00:20:49.992 "name": "BaseBdev3", 00:20:49.992 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:49.992 "is_configured": true, 00:20:49.992 "data_offset": 2048, 00:20:49.992 "data_size": 63488 00:20:49.992 }, 00:20:49.992 { 00:20:49.992 "name": "BaseBdev4", 00:20:49.992 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:49.992 "is_configured": true, 00:20:49.992 "data_offset": 2048, 00:20:49.992 "data_size": 63488 00:20:49.992 } 00:20:49.992 ] 00:20:49.992 }' 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.992 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:50.559 14:20:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.559 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6506b8a4-ff41-4371-a84f-75aa29e1d9a2 00:20:50.559 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.559 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.559 [2024-11-27 14:20:21.068971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:50.559 [2024-11-27 14:20:21.069339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:50.559 [2024-11-27 
14:20:21.069359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:50.559 NewBaseBdev 00:20:50.559 [2024-11-27 14:20:21.069681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.817 [2024-11-27 14:20:21.076419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:50.817 [2024-11-27 14:20:21.076451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:50.817 [2024-11-27 14:20:21.076904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.817 [ 00:20:50.817 { 00:20:50.817 "name": "NewBaseBdev", 00:20:50.817 "aliases": [ 00:20:50.817 "6506b8a4-ff41-4371-a84f-75aa29e1d9a2" 00:20:50.817 ], 00:20:50.817 "product_name": "Malloc disk", 00:20:50.817 "block_size": 512, 00:20:50.817 "num_blocks": 65536, 00:20:50.817 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:50.817 "assigned_rate_limits": { 00:20:50.817 "rw_ios_per_sec": 0, 00:20:50.817 "rw_mbytes_per_sec": 0, 00:20:50.817 "r_mbytes_per_sec": 0, 00:20:50.817 "w_mbytes_per_sec": 0 00:20:50.817 }, 00:20:50.817 "claimed": true, 00:20:50.817 "claim_type": "exclusive_write", 00:20:50.817 "zoned": false, 00:20:50.817 "supported_io_types": { 00:20:50.817 "read": true, 00:20:50.817 "write": true, 00:20:50.817 "unmap": true, 00:20:50.817 "flush": true, 00:20:50.817 "reset": true, 00:20:50.817 "nvme_admin": false, 00:20:50.817 "nvme_io": false, 00:20:50.817 "nvme_io_md": false, 00:20:50.817 "write_zeroes": true, 00:20:50.817 "zcopy": true, 00:20:50.817 "get_zone_info": false, 00:20:50.817 "zone_management": false, 00:20:50.817 "zone_append": false, 00:20:50.817 "compare": false, 00:20:50.817 "compare_and_write": false, 00:20:50.817 "abort": true, 00:20:50.817 "seek_hole": false, 00:20:50.817 "seek_data": false, 00:20:50.817 "copy": true, 00:20:50.817 "nvme_iov_md": false 00:20:50.817 }, 00:20:50.817 "memory_domains": [ 00:20:50.817 { 00:20:50.817 "dma_device_id": "system", 00:20:50.817 "dma_device_type": 1 00:20:50.817 }, 00:20:50.817 { 00:20:50.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.817 "dma_device_type": 2 00:20:50.817 } 00:20:50.817 ], 00:20:50.817 "driver_specific": {} 00:20:50.817 } 00:20:50.817 ] 00:20:50.817 14:20:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:50.817 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.817 "name": "Existed_Raid", 00:20:50.817 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:50.817 "strip_size_kb": 64, 00:20:50.817 "state": "online", 00:20:50.817 "raid_level": "raid5f", 00:20:50.817 "superblock": true, 00:20:50.817 "num_base_bdevs": 4, 00:20:50.817 "num_base_bdevs_discovered": 4, 00:20:50.817 "num_base_bdevs_operational": 4, 00:20:50.817 "base_bdevs_list": [ 00:20:50.817 { 00:20:50.817 "name": "NewBaseBdev", 00:20:50.817 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:50.817 "is_configured": true, 00:20:50.817 "data_offset": 2048, 00:20:50.817 "data_size": 63488 00:20:50.817 }, 00:20:50.817 { 00:20:50.818 "name": "BaseBdev2", 00:20:50.818 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:50.818 "is_configured": true, 00:20:50.818 "data_offset": 2048, 00:20:50.818 "data_size": 63488 00:20:50.818 }, 00:20:50.818 { 00:20:50.818 "name": "BaseBdev3", 00:20:50.818 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:50.818 "is_configured": true, 00:20:50.818 "data_offset": 2048, 00:20:50.818 "data_size": 63488 00:20:50.818 }, 00:20:50.818 { 00:20:50.818 "name": "BaseBdev4", 00:20:50.818 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:50.818 "is_configured": true, 00:20:50.818 "data_offset": 2048, 00:20:50.818 "data_size": 63488 00:20:50.818 } 00:20:50.818 ] 00:20:50.818 }' 00:20:50.818 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.818 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:51.385 [2024-11-27 14:20:21.624999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.385 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.385 "name": "Existed_Raid", 00:20:51.385 "aliases": [ 00:20:51.385 "c27802d4-41ad-4486-be0c-22d3938f73ec" 00:20:51.385 ], 00:20:51.385 "product_name": "Raid Volume", 00:20:51.385 "block_size": 512, 00:20:51.385 "num_blocks": 190464, 00:20:51.385 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:51.385 "assigned_rate_limits": { 00:20:51.385 "rw_ios_per_sec": 0, 00:20:51.385 "rw_mbytes_per_sec": 0, 00:20:51.385 "r_mbytes_per_sec": 0, 00:20:51.385 "w_mbytes_per_sec": 0 00:20:51.385 }, 00:20:51.385 "claimed": false, 00:20:51.385 "zoned": false, 00:20:51.385 "supported_io_types": { 00:20:51.385 "read": true, 00:20:51.385 "write": true, 00:20:51.385 "unmap": false, 00:20:51.385 "flush": false, 00:20:51.385 "reset": true, 00:20:51.385 "nvme_admin": false, 00:20:51.385 "nvme_io": false, 
00:20:51.385 "nvme_io_md": false, 00:20:51.385 "write_zeroes": true, 00:20:51.385 "zcopy": false, 00:20:51.385 "get_zone_info": false, 00:20:51.385 "zone_management": false, 00:20:51.385 "zone_append": false, 00:20:51.385 "compare": false, 00:20:51.385 "compare_and_write": false, 00:20:51.385 "abort": false, 00:20:51.385 "seek_hole": false, 00:20:51.385 "seek_data": false, 00:20:51.385 "copy": false, 00:20:51.385 "nvme_iov_md": false 00:20:51.385 }, 00:20:51.385 "driver_specific": { 00:20:51.385 "raid": { 00:20:51.385 "uuid": "c27802d4-41ad-4486-be0c-22d3938f73ec", 00:20:51.385 "strip_size_kb": 64, 00:20:51.385 "state": "online", 00:20:51.385 "raid_level": "raid5f", 00:20:51.385 "superblock": true, 00:20:51.385 "num_base_bdevs": 4, 00:20:51.385 "num_base_bdevs_discovered": 4, 00:20:51.385 "num_base_bdevs_operational": 4, 00:20:51.385 "base_bdevs_list": [ 00:20:51.385 { 00:20:51.385 "name": "NewBaseBdev", 00:20:51.385 "uuid": "6506b8a4-ff41-4371-a84f-75aa29e1d9a2", 00:20:51.385 "is_configured": true, 00:20:51.385 "data_offset": 2048, 00:20:51.385 "data_size": 63488 00:20:51.385 }, 00:20:51.385 { 00:20:51.385 "name": "BaseBdev2", 00:20:51.385 "uuid": "8309484a-b8c7-4299-8b18-2c480c60ab9a", 00:20:51.385 "is_configured": true, 00:20:51.385 "data_offset": 2048, 00:20:51.385 "data_size": 63488 00:20:51.385 }, 00:20:51.385 { 00:20:51.385 "name": "BaseBdev3", 00:20:51.386 "uuid": "bbd90330-f4f0-41f4-aa6c-702b084e639e", 00:20:51.386 "is_configured": true, 00:20:51.386 "data_offset": 2048, 00:20:51.386 "data_size": 63488 00:20:51.386 }, 00:20:51.386 { 00:20:51.386 "name": "BaseBdev4", 00:20:51.386 "uuid": "07b77d56-f632-41e1-afd4-b1e451b12309", 00:20:51.386 "is_configured": true, 00:20:51.386 "data_offset": 2048, 00:20:51.386 "data_size": 63488 00:20:51.386 } 00:20:51.386 ] 00:20:51.386 } 00:20:51.386 } 00:20:51.386 }' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:51.386 BaseBdev2 00:20:51.386 BaseBdev3 00:20:51.386 BaseBdev4' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 14:20:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.386 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.645 14:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [2024-11-27 14:20:22.004768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.645 [2024-11-27 14:20:22.004973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.645 [2024-11-27 14:20:22.005098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.645 [2024-11-27 14:20:22.005476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.645 [2024-11-27 14:20:22.005496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84081 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84081 ']' 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84081 00:20:51.645 14:20:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84081 00:20:51.645 killing process with pid 84081 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84081' 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84081 00:20:51.645 [2024-11-27 14:20:22.041561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.645 14:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84081 00:20:51.904 [2024-11-27 14:20:22.410282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.279 14:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:53.279 00:20:53.279 real 0m13.068s 00:20:53.280 user 0m21.601s 00:20:53.280 sys 0m1.859s 00:20:53.280 14:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.280 14:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.280 ************************************ 00:20:53.280 END TEST raid5f_state_function_test_sb 00:20:53.280 ************************************ 00:20:53.280 14:20:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:20:53.280 14:20:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:53.280 
14:20:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.280 14:20:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:53.280 ************************************ 00:20:53.280 START TEST raid5f_superblock_test 00:20:53.280 ************************************ 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84763 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84763 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84763 ']' 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.280 14:20:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.280 [2024-11-27 14:20:23.656752] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:20:53.280 [2024-11-27 14:20:23.657223] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84763 ] 00:20:53.538 [2024-11-27 14:20:23.849545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.538 [2024-11-27 14:20:24.024718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.797 [2024-11-27 14:20:24.270950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.797 [2024-11-27 14:20:24.271019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 malloc1 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 [2024-11-27 14:20:24.693914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:54.365 [2024-11-27 14:20:24.694146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.365 [2024-11-27 14:20:24.694257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:54.365 [2024-11-27 14:20:24.694477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.365 [2024-11-27 14:20:24.697553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.365 [2024-11-27 14:20:24.697723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:54.365 pt1 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 malloc2 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.365 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.366 [2024-11-27 14:20:24.751285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:54.366 [2024-11-27 14:20:24.751489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.366 [2024-11-27 14:20:24.751607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:54.366 [2024-11-27 14:20:24.751798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.366 [2024-11-27 14:20:24.754942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.366 [2024-11-27 14:20:24.755113] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:54.366 pt2 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.366 malloc3 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.366 [2024-11-27 14:20:24.816771] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:54.366 [2024-11-27 14:20:24.816852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.366 [2024-11-27 14:20:24.816887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:54.366 [2024-11-27 14:20:24.816913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.366 [2024-11-27 14:20:24.819818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.366 [2024-11-27 14:20:24.820019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:54.366 pt3 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.366 14:20:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.366 malloc4 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.366 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.366 [2024-11-27 14:20:24.875410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:54.366 [2024-11-27 14:20:24.875524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.366 [2024-11-27 14:20:24.875568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:54.366 [2024-11-27 14:20:24.875589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.625 [2024-11-27 14:20:24.878492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.625 [2024-11-27 14:20:24.878543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:54.625 pt4 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.625 [2024-11-27 14:20:24.883465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:54.625 [2024-11-27 14:20:24.886206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:54.625 [2024-11-27 14:20:24.886331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:54.625 [2024-11-27 14:20:24.886405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:54.625 [2024-11-27 14:20:24.886678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:54.625 [2024-11-27 14:20:24.886703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:54.625 [2024-11-27 14:20:24.887178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:54.625 [2024-11-27 14:20:24.894226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:54.625 [2024-11-27 14:20:24.894369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:54.625 [2024-11-27 14:20:24.894802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.625 
14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.625 "name": "raid_bdev1", 00:20:54.625 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:54.625 "strip_size_kb": 64, 00:20:54.625 "state": "online", 00:20:54.625 "raid_level": "raid5f", 00:20:54.625 "superblock": true, 00:20:54.625 "num_base_bdevs": 4, 00:20:54.625 "num_base_bdevs_discovered": 4, 00:20:54.625 "num_base_bdevs_operational": 4, 00:20:54.625 "base_bdevs_list": [ 00:20:54.625 { 00:20:54.625 "name": "pt1", 00:20:54.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.625 "is_configured": true, 00:20:54.625 "data_offset": 2048, 00:20:54.625 "data_size": 63488 00:20:54.625 }, 00:20:54.625 { 00:20:54.625 "name": "pt2", 00:20:54.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.625 "is_configured": true, 00:20:54.625 "data_offset": 2048, 00:20:54.625 
"data_size": 63488 00:20:54.625 }, 00:20:54.625 { 00:20:54.625 "name": "pt3", 00:20:54.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.625 "is_configured": true, 00:20:54.625 "data_offset": 2048, 00:20:54.625 "data_size": 63488 00:20:54.625 }, 00:20:54.625 { 00:20:54.625 "name": "pt4", 00:20:54.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:54.625 "is_configured": true, 00:20:54.625 "data_offset": 2048, 00:20:54.625 "data_size": 63488 00:20:54.625 } 00:20:54.625 ] 00:20:54.625 }' 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.625 14:20:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.193 [2024-11-27 14:20:25.438905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.193 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.193 "name": "raid_bdev1", 00:20:55.193 "aliases": [ 00:20:55.193 "0efa7785-f708-41bf-a0d1-91af4ed83d57" 00:20:55.193 ], 00:20:55.193 "product_name": "Raid Volume", 00:20:55.193 "block_size": 512, 00:20:55.193 "num_blocks": 190464, 00:20:55.193 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:55.193 "assigned_rate_limits": { 00:20:55.193 "rw_ios_per_sec": 0, 00:20:55.193 "rw_mbytes_per_sec": 0, 00:20:55.193 "r_mbytes_per_sec": 0, 00:20:55.193 "w_mbytes_per_sec": 0 00:20:55.193 }, 00:20:55.193 "claimed": false, 00:20:55.193 "zoned": false, 00:20:55.193 "supported_io_types": { 00:20:55.193 "read": true, 00:20:55.193 "write": true, 00:20:55.193 "unmap": false, 00:20:55.193 "flush": false, 00:20:55.193 "reset": true, 00:20:55.193 "nvme_admin": false, 00:20:55.193 "nvme_io": false, 00:20:55.193 "nvme_io_md": false, 00:20:55.193 "write_zeroes": true, 00:20:55.193 "zcopy": false, 00:20:55.193 "get_zone_info": false, 00:20:55.193 "zone_management": false, 00:20:55.193 "zone_append": false, 00:20:55.193 "compare": false, 00:20:55.193 "compare_and_write": false, 00:20:55.193 "abort": false, 00:20:55.193 "seek_hole": false, 00:20:55.193 "seek_data": false, 00:20:55.193 "copy": false, 00:20:55.193 "nvme_iov_md": false 00:20:55.193 }, 00:20:55.193 "driver_specific": { 00:20:55.193 "raid": { 00:20:55.193 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:55.193 "strip_size_kb": 64, 00:20:55.193 "state": "online", 00:20:55.193 "raid_level": "raid5f", 00:20:55.193 "superblock": true, 00:20:55.193 "num_base_bdevs": 4, 00:20:55.193 "num_base_bdevs_discovered": 4, 00:20:55.193 "num_base_bdevs_operational": 4, 00:20:55.193 "base_bdevs_list": [ 00:20:55.193 { 00:20:55.193 "name": "pt1", 00:20:55.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.193 "is_configured": true, 00:20:55.193 "data_offset": 2048, 
00:20:55.193 "data_size": 63488 00:20:55.193 }, 00:20:55.193 { 00:20:55.193 "name": "pt2", 00:20:55.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:55.193 "is_configured": true, 00:20:55.193 "data_offset": 2048, 00:20:55.193 "data_size": 63488 00:20:55.193 }, 00:20:55.193 { 00:20:55.193 "name": "pt3", 00:20:55.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:55.193 "is_configured": true, 00:20:55.194 "data_offset": 2048, 00:20:55.194 "data_size": 63488 00:20:55.194 }, 00:20:55.194 { 00:20:55.194 "name": "pt4", 00:20:55.194 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:55.194 "is_configured": true, 00:20:55.194 "data_offset": 2048, 00:20:55.194 "data_size": 63488 00:20:55.194 } 00:20:55.194 ] 00:20:55.194 } 00:20:55.194 } 00:20:55.194 }' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:55.194 pt2 00:20:55.194 pt3 00:20:55.194 pt4' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.194 14:20:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.194 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 [2024-11-27 14:20:25.810949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0efa7785-f708-41bf-a0d1-91af4ed83d57 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
0efa7785-f708-41bf-a0d1-91af4ed83d57 ']' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 [2024-11-27 14:20:25.870775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:55.453 [2024-11-27 14:20:25.870808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:55.453 [2024-11-27 14:20:25.870935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:55.453 [2024-11-27 14:20:25.871048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:55.453 [2024-11-27 14:20:25.871073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.453 
14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.453 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 14:20:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.713 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:55.713 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.713 14:20:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:55.713 14:20:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 [2024-11-27 14:20:26.034848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:55.713 [2024-11-27 14:20:26.037551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:55.713 [2024-11-27 14:20:26.037621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:55.713 [2024-11-27 14:20:26.037676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:55.713 [2024-11-27 14:20:26.037751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:55.713 [2024-11-27 14:20:26.037853] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:55.713 [2024-11-27 14:20:26.037894] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:55.713 [2024-11-27 14:20:26.037928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:55.713 [2024-11-27 14:20:26.037952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:55.713 [2024-11-27 14:20:26.037969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:55.713 request: 00:20:55.713 { 00:20:55.713 "name": "raid_bdev1", 00:20:55.713 "raid_level": "raid5f", 00:20:55.713 "base_bdevs": [ 00:20:55.713 "malloc1", 00:20:55.713 "malloc2", 00:20:55.713 "malloc3", 00:20:55.713 "malloc4" 00:20:55.713 ], 00:20:55.713 "strip_size_kb": 64, 00:20:55.713 "superblock": false, 00:20:55.713 "method": "bdev_raid_create", 00:20:55.713 "req_id": 1 00:20:55.713 } 00:20:55.713 Got JSON-RPC error response 
00:20:55.713 response: 00:20:55.713 { 00:20:55.713 "code": -17, 00:20:55.713 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:55.713 } 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.713 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.713 [2024-11-27 14:20:26.094812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:55.713 [2024-11-27 14:20:26.095022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:20:55.713 [2024-11-27 14:20:26.095093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:55.713 [2024-11-27 14:20:26.095213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.713 [2024-11-27 14:20:26.098202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.713 [2024-11-27 14:20:26.098371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:55.713 [2024-11-27 14:20:26.098573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:55.714 [2024-11-27 14:20:26.098753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:55.714 pt1 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.714 "name": "raid_bdev1", 00:20:55.714 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:55.714 "strip_size_kb": 64, 00:20:55.714 "state": "configuring", 00:20:55.714 "raid_level": "raid5f", 00:20:55.714 "superblock": true, 00:20:55.714 "num_base_bdevs": 4, 00:20:55.714 "num_base_bdevs_discovered": 1, 00:20:55.714 "num_base_bdevs_operational": 4, 00:20:55.714 "base_bdevs_list": [ 00:20:55.714 { 00:20:55.714 "name": "pt1", 00:20:55.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.714 "is_configured": true, 00:20:55.714 "data_offset": 2048, 00:20:55.714 "data_size": 63488 00:20:55.714 }, 00:20:55.714 { 00:20:55.714 "name": null, 00:20:55.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:55.714 "is_configured": false, 00:20:55.714 "data_offset": 2048, 00:20:55.714 "data_size": 63488 00:20:55.714 }, 00:20:55.714 { 00:20:55.714 "name": null, 00:20:55.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:55.714 "is_configured": false, 00:20:55.714 "data_offset": 2048, 00:20:55.714 "data_size": 63488 00:20:55.714 }, 00:20:55.714 { 00:20:55.714 "name": null, 00:20:55.714 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:55.714 "is_configured": false, 00:20:55.714 "data_offset": 2048, 00:20:55.714 "data_size": 63488 00:20:55.714 } 00:20:55.714 ] 00:20:55.714 }' 
00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.714 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.279 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:20:56.279 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:56.279 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.279 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.279 [2024-11-27 14:20:26.603221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:56.279 [2024-11-27 14:20:26.603316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.279 [2024-11-27 14:20:26.603347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:56.279 [2024-11-27 14:20:26.603364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.279 [2024-11-27 14:20:26.603976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.279 [2024-11-27 14:20:26.604018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:56.280 [2024-11-27 14:20:26.604124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:56.280 [2024-11-27 14:20:26.604163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:56.280 pt2 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.280 [2024-11-27 14:20:26.611206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.280 "name": "raid_bdev1", 00:20:56.280 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:56.280 "strip_size_kb": 64, 00:20:56.280 "state": "configuring", 00:20:56.280 "raid_level": "raid5f", 00:20:56.280 "superblock": true, 00:20:56.280 "num_base_bdevs": 4, 00:20:56.280 "num_base_bdevs_discovered": 1, 00:20:56.280 "num_base_bdevs_operational": 4, 00:20:56.280 "base_bdevs_list": [ 00:20:56.280 { 00:20:56.280 "name": "pt1", 00:20:56.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:56.280 "is_configured": true, 00:20:56.280 "data_offset": 2048, 00:20:56.280 "data_size": 63488 00:20:56.280 }, 00:20:56.280 { 00:20:56.280 "name": null, 00:20:56.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:56.280 "is_configured": false, 00:20:56.280 "data_offset": 0, 00:20:56.280 "data_size": 63488 00:20:56.280 }, 00:20:56.280 { 00:20:56.280 "name": null, 00:20:56.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:56.280 "is_configured": false, 00:20:56.280 "data_offset": 2048, 00:20:56.280 "data_size": 63488 00:20:56.280 }, 00:20:56.280 { 00:20:56.280 "name": null, 00:20:56.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:56.280 "is_configured": false, 00:20:56.280 "data_offset": 2048, 00:20:56.280 "data_size": 63488 00:20:56.280 } 00:20:56.280 ] 00:20:56.280 }' 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.280 14:20:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.848 [2024-11-27 14:20:27.119338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:56.848 [2024-11-27 14:20:27.119554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.848 [2024-11-27 14:20:27.119596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:56.848 [2024-11-27 14:20:27.119612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.848 [2024-11-27 14:20:27.120191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.848 [2024-11-27 14:20:27.120217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:56.848 [2024-11-27 14:20:27.120326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:56.848 [2024-11-27 14:20:27.120373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:56.848 pt2 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.848 [2024-11-27 14:20:27.127320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:20:56.848 [2024-11-27 14:20:27.127378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.848 [2024-11-27 14:20:27.127413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:56.848 [2024-11-27 14:20:27.127429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.848 [2024-11-27 14:20:27.127879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.848 [2024-11-27 14:20:27.127909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:56.848 [2024-11-27 14:20:27.127989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:56.848 [2024-11-27 14:20:27.128024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:56.848 pt3 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.848 [2024-11-27 14:20:27.135285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:56.848 [2024-11-27 14:20:27.135336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.848 [2024-11-27 14:20:27.135363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:56.848 [2024-11-27 14:20:27.135377] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.848 [2024-11-27 14:20:27.135853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.848 [2024-11-27 14:20:27.135891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:56.848 [2024-11-27 14:20:27.135973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:56.848 [2024-11-27 14:20:27.136007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:56.848 [2024-11-27 14:20:27.136192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:56.848 [2024-11-27 14:20:27.136213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:56.848 [2024-11-27 14:20:27.136520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:56.848 [2024-11-27 14:20:27.143091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:56.848 [2024-11-27 14:20:27.143122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:56.848 [2024-11-27 14:20:27.143358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.848 pt4 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:56.848 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.849 "name": "raid_bdev1", 00:20:56.849 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:56.849 "strip_size_kb": 64, 00:20:56.849 "state": "online", 00:20:56.849 "raid_level": "raid5f", 00:20:56.849 "superblock": true, 00:20:56.849 "num_base_bdevs": 4, 00:20:56.849 "num_base_bdevs_discovered": 4, 00:20:56.849 "num_base_bdevs_operational": 4, 00:20:56.849 "base_bdevs_list": [ 00:20:56.849 { 00:20:56.849 "name": "pt1", 00:20:56.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:56.849 "is_configured": true, 00:20:56.849 
"data_offset": 2048, 00:20:56.849 "data_size": 63488 00:20:56.849 }, 00:20:56.849 { 00:20:56.849 "name": "pt2", 00:20:56.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:56.849 "is_configured": true, 00:20:56.849 "data_offset": 2048, 00:20:56.849 "data_size": 63488 00:20:56.849 }, 00:20:56.849 { 00:20:56.849 "name": "pt3", 00:20:56.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:56.849 "is_configured": true, 00:20:56.849 "data_offset": 2048, 00:20:56.849 "data_size": 63488 00:20:56.849 }, 00:20:56.849 { 00:20:56.849 "name": "pt4", 00:20:56.849 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:56.849 "is_configured": true, 00:20:56.849 "data_offset": 2048, 00:20:56.849 "data_size": 63488 00:20:56.849 } 00:20:56.849 ] 00:20:56.849 }' 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.849 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.416 14:20:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.416 [2024-11-27 14:20:27.647134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.416 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.416 "name": "raid_bdev1", 00:20:57.416 "aliases": [ 00:20:57.416 "0efa7785-f708-41bf-a0d1-91af4ed83d57" 00:20:57.416 ], 00:20:57.416 "product_name": "Raid Volume", 00:20:57.416 "block_size": 512, 00:20:57.416 "num_blocks": 190464, 00:20:57.416 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:57.416 "assigned_rate_limits": { 00:20:57.416 "rw_ios_per_sec": 0, 00:20:57.416 "rw_mbytes_per_sec": 0, 00:20:57.416 "r_mbytes_per_sec": 0, 00:20:57.416 "w_mbytes_per_sec": 0 00:20:57.416 }, 00:20:57.416 "claimed": false, 00:20:57.416 "zoned": false, 00:20:57.416 "supported_io_types": { 00:20:57.416 "read": true, 00:20:57.416 "write": true, 00:20:57.416 "unmap": false, 00:20:57.416 "flush": false, 00:20:57.416 "reset": true, 00:20:57.416 "nvme_admin": false, 00:20:57.416 "nvme_io": false, 00:20:57.416 "nvme_io_md": false, 00:20:57.416 "write_zeroes": true, 00:20:57.416 "zcopy": false, 00:20:57.416 "get_zone_info": false, 00:20:57.416 "zone_management": false, 00:20:57.416 "zone_append": false, 00:20:57.416 "compare": false, 00:20:57.416 "compare_and_write": false, 00:20:57.416 "abort": false, 00:20:57.416 "seek_hole": false, 00:20:57.417 "seek_data": false, 00:20:57.417 "copy": false, 00:20:57.417 "nvme_iov_md": false 00:20:57.417 }, 00:20:57.417 "driver_specific": { 00:20:57.417 "raid": { 00:20:57.417 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:57.417 "strip_size_kb": 64, 00:20:57.417 "state": "online", 00:20:57.417 "raid_level": "raid5f", 00:20:57.417 "superblock": true, 00:20:57.417 "num_base_bdevs": 4, 00:20:57.417 "num_base_bdevs_discovered": 4, 
00:20:57.417 "num_base_bdevs_operational": 4, 00:20:57.417 "base_bdevs_list": [ 00:20:57.417 { 00:20:57.417 "name": "pt1", 00:20:57.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:57.417 "is_configured": true, 00:20:57.417 "data_offset": 2048, 00:20:57.417 "data_size": 63488 00:20:57.417 }, 00:20:57.417 { 00:20:57.417 "name": "pt2", 00:20:57.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:57.417 "is_configured": true, 00:20:57.417 "data_offset": 2048, 00:20:57.417 "data_size": 63488 00:20:57.417 }, 00:20:57.417 { 00:20:57.417 "name": "pt3", 00:20:57.417 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:57.417 "is_configured": true, 00:20:57.417 "data_offset": 2048, 00:20:57.417 "data_size": 63488 00:20:57.417 }, 00:20:57.417 { 00:20:57.417 "name": "pt4", 00:20:57.417 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:57.417 "is_configured": true, 00:20:57.417 "data_offset": 2048, 00:20:57.417 "data_size": 63488 00:20:57.417 } 00:20:57.417 ] 00:20:57.417 } 00:20:57.417 } 00:20:57.417 }' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:57.417 pt2 00:20:57.417 pt3 00:20:57.417 pt4' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.417 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.417 14:20:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.674 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.674 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:57.674 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:57.675 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:57.675 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.675 14:20:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:57.675 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.675 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.675 14:20:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.675 [2024-11-27 14:20:28.015146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.675 
14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0efa7785-f708-41bf-a0d1-91af4ed83d57 '!=' 0efa7785-f708-41bf-a0d1-91af4ed83d57 ']' 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.675 [2024-11-27 14:20:28.062986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.675 "name": "raid_bdev1", 00:20:57.675 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:57.675 "strip_size_kb": 64, 00:20:57.675 "state": "online", 00:20:57.675 "raid_level": "raid5f", 00:20:57.675 "superblock": true, 00:20:57.675 "num_base_bdevs": 4, 00:20:57.675 "num_base_bdevs_discovered": 3, 00:20:57.675 "num_base_bdevs_operational": 3, 00:20:57.675 "base_bdevs_list": [ 00:20:57.675 { 00:20:57.675 "name": null, 00:20:57.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.675 "is_configured": false, 00:20:57.675 "data_offset": 0, 00:20:57.675 "data_size": 63488 00:20:57.675 }, 00:20:57.675 { 00:20:57.675 "name": "pt2", 00:20:57.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:57.675 "is_configured": true, 00:20:57.675 "data_offset": 2048, 00:20:57.675 "data_size": 63488 00:20:57.675 }, 00:20:57.675 { 00:20:57.675 "name": "pt3", 00:20:57.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:57.675 "is_configured": true, 00:20:57.675 "data_offset": 2048, 00:20:57.675 "data_size": 63488 00:20:57.675 }, 00:20:57.675 { 00:20:57.675 "name": "pt4", 00:20:57.675 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:57.675 "is_configured": true, 00:20:57.675 
"data_offset": 2048, 00:20:57.675 "data_size": 63488 00:20:57.675 } 00:20:57.675 ] 00:20:57.675 }' 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.675 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.239 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:58.239 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 [2024-11-27 14:20:28.575212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.240 [2024-11-27 14:20:28.575408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.240 [2024-11-27 14:20:28.575626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.240 [2024-11-27 14:20:28.575896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.240 [2024-11-27 14:20:28.576059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 [2024-11-27 14:20:28.663205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:58.240 [2024-11-27 14:20:28.663419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.240 [2024-11-27 14:20:28.663463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:58.240 [2024-11-27 14:20:28.663480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.240 [2024-11-27 14:20:28.666609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.240 [2024-11-27 14:20:28.666768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:58.240 [2024-11-27 14:20:28.666908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:58.240 [2024-11-27 14:20:28.666973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:58.240 pt2 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.240 "name": "raid_bdev1", 00:20:58.240 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:58.240 "strip_size_kb": 64, 00:20:58.240 "state": "configuring", 00:20:58.240 "raid_level": "raid5f", 00:20:58.240 "superblock": true, 00:20:58.240 
"num_base_bdevs": 4, 00:20:58.240 "num_base_bdevs_discovered": 1, 00:20:58.240 "num_base_bdevs_operational": 3, 00:20:58.240 "base_bdevs_list": [ 00:20:58.240 { 00:20:58.240 "name": null, 00:20:58.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.240 "is_configured": false, 00:20:58.240 "data_offset": 2048, 00:20:58.240 "data_size": 63488 00:20:58.240 }, 00:20:58.240 { 00:20:58.240 "name": "pt2", 00:20:58.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:58.240 "is_configured": true, 00:20:58.240 "data_offset": 2048, 00:20:58.240 "data_size": 63488 00:20:58.240 }, 00:20:58.240 { 00:20:58.240 "name": null, 00:20:58.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:58.240 "is_configured": false, 00:20:58.240 "data_offset": 2048, 00:20:58.240 "data_size": 63488 00:20:58.240 }, 00:20:58.240 { 00:20:58.240 "name": null, 00:20:58.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:58.240 "is_configured": false, 00:20:58.240 "data_offset": 2048, 00:20:58.240 "data_size": 63488 00:20:58.240 } 00:20:58.240 ] 00:20:58.240 }' 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.240 14:20:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.804 [2024-11-27 14:20:29.203417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:58.804 [2024-11-27 
14:20:29.203544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.804 [2024-11-27 14:20:29.203591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:58.804 [2024-11-27 14:20:29.203611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.804 [2024-11-27 14:20:29.204243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.804 [2024-11-27 14:20:29.204284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:58.804 [2024-11-27 14:20:29.204399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:58.804 [2024-11-27 14:20:29.204433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:58.804 pt3 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.804 "name": "raid_bdev1", 00:20:58.804 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:58.804 "strip_size_kb": 64, 00:20:58.804 "state": "configuring", 00:20:58.804 "raid_level": "raid5f", 00:20:58.804 "superblock": true, 00:20:58.804 "num_base_bdevs": 4, 00:20:58.804 "num_base_bdevs_discovered": 2, 00:20:58.804 "num_base_bdevs_operational": 3, 00:20:58.804 "base_bdevs_list": [ 00:20:58.804 { 00:20:58.804 "name": null, 00:20:58.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.804 "is_configured": false, 00:20:58.804 "data_offset": 2048, 00:20:58.804 "data_size": 63488 00:20:58.804 }, 00:20:58.804 { 00:20:58.804 "name": "pt2", 00:20:58.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:58.804 "is_configured": true, 00:20:58.804 "data_offset": 2048, 00:20:58.804 "data_size": 63488 00:20:58.804 }, 00:20:58.804 { 00:20:58.804 "name": "pt3", 00:20:58.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:58.804 "is_configured": true, 00:20:58.804 "data_offset": 2048, 00:20:58.804 "data_size": 63488 00:20:58.804 }, 00:20:58.804 { 00:20:58.804 "name": null, 00:20:58.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:58.804 "is_configured": false, 00:20:58.804 "data_offset": 2048, 
00:20:58.804 "data_size": 63488 00:20:58.804 } 00:20:58.804 ] 00:20:58.804 }' 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.804 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.369 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:59.369 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:59.369 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:20:59.369 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:59.369 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.369 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.369 [2024-11-27 14:20:29.715540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:59.369 [2024-11-27 14:20:29.715632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.369 [2024-11-27 14:20:29.715667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:59.369 [2024-11-27 14:20:29.715683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.369 [2024-11-27 14:20:29.716270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.369 [2024-11-27 14:20:29.716296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:59.369 [2024-11-27 14:20:29.716404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:59.369 [2024-11-27 14:20:29.716445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:59.369 [2024-11-27 14:20:29.716614] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:59.369 [2024-11-27 14:20:29.716630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:59.370 [2024-11-27 14:20:29.716957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:59.370 [2024-11-27 14:20:29.723344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:59.370 [2024-11-27 14:20:29.723377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:59.370 [2024-11-27 14:20:29.723717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.370 pt4 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.370 
14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.370 "name": "raid_bdev1", 00:20:59.370 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:59.370 "strip_size_kb": 64, 00:20:59.370 "state": "online", 00:20:59.370 "raid_level": "raid5f", 00:20:59.370 "superblock": true, 00:20:59.370 "num_base_bdevs": 4, 00:20:59.370 "num_base_bdevs_discovered": 3, 00:20:59.370 "num_base_bdevs_operational": 3, 00:20:59.370 "base_bdevs_list": [ 00:20:59.370 { 00:20:59.370 "name": null, 00:20:59.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.370 "is_configured": false, 00:20:59.370 "data_offset": 2048, 00:20:59.370 "data_size": 63488 00:20:59.370 }, 00:20:59.370 { 00:20:59.370 "name": "pt2", 00:20:59.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.370 "is_configured": true, 00:20:59.370 "data_offset": 2048, 00:20:59.370 "data_size": 63488 00:20:59.370 }, 00:20:59.370 { 00:20:59.370 "name": "pt3", 00:20:59.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:59.370 "is_configured": true, 00:20:59.370 "data_offset": 2048, 00:20:59.370 "data_size": 63488 00:20:59.370 }, 00:20:59.370 { 00:20:59.370 "name": "pt4", 00:20:59.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:59.370 "is_configured": true, 00:20:59.370 "data_offset": 2048, 00:20:59.370 "data_size": 63488 00:20:59.370 } 00:20:59.370 ] 00:20:59.370 }' 00:20:59.370 14:20:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.370 14:20:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.948 [2024-11-27 14:20:30.255270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.948 [2024-11-27 14:20:30.255637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.948 [2024-11-27 14:20:30.255797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.948 [2024-11-27 14:20:30.255935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.948 [2024-11-27 14:20:30.255962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:59.948 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.949 [2024-11-27 14:20:30.331260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:59.949 [2024-11-27 14:20:30.331383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.949 [2024-11-27 14:20:30.331427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:59.949 [2024-11-27 14:20:30.331452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.949 [2024-11-27 14:20:30.334717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.949 [2024-11-27 14:20:30.334773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:59.949 [2024-11-27 14:20:30.334935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:59.949 [2024-11-27 14:20:30.335012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:59.949 
[2024-11-27 14:20:30.335194] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:59.949 [2024-11-27 14:20:30.335354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.949 [2024-11-27 14:20:30.335387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:59.949 [2024-11-27 14:20:30.335474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:59.949 [2024-11-27 14:20:30.335706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:59.949 pt1 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.949 "name": "raid_bdev1", 00:20:59.949 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:20:59.949 "strip_size_kb": 64, 00:20:59.949 "state": "configuring", 00:20:59.949 "raid_level": "raid5f", 00:20:59.949 "superblock": true, 00:20:59.949 "num_base_bdevs": 4, 00:20:59.949 "num_base_bdevs_discovered": 2, 00:20:59.949 "num_base_bdevs_operational": 3, 00:20:59.949 "base_bdevs_list": [ 00:20:59.949 { 00:20:59.949 "name": null, 00:20:59.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.949 "is_configured": false, 00:20:59.949 "data_offset": 2048, 00:20:59.949 "data_size": 63488 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "name": "pt2", 00:20:59.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.949 "is_configured": true, 00:20:59.949 "data_offset": 2048, 00:20:59.949 "data_size": 63488 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "name": "pt3", 00:20:59.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:59.949 "is_configured": true, 00:20:59.949 "data_offset": 2048, 00:20:59.949 "data_size": 63488 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "name": null, 00:20:59.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:59.949 "is_configured": false, 00:20:59.949 "data_offset": 2048, 00:20:59.949 "data_size": 63488 00:20:59.949 } 00:20:59.949 ] 
00:20:59.949 }' 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.949 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.546 [2024-11-27 14:20:30.867529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:00.546 [2024-11-27 14:20:30.867885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.546 [2024-11-27 14:20:30.867939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:00.546 [2024-11-27 14:20:30.867958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.546 [2024-11-27 14:20:30.868608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.546 [2024-11-27 14:20:30.868634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:21:00.546 [2024-11-27 14:20:30.868765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:00.546 [2024-11-27 14:20:30.868803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:00.546 [2024-11-27 14:20:30.869021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:00.546 [2024-11-27 14:20:30.869039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:00.546 [2024-11-27 14:20:30.869371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:00.546 [2024-11-27 14:20:30.876151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:00.546 [2024-11-27 14:20:30.876306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:00.546 [2024-11-27 14:20:30.876810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.546 pt4 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.546 14:20:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.546 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.547 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.547 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.547 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.547 "name": "raid_bdev1", 00:21:00.547 "uuid": "0efa7785-f708-41bf-a0d1-91af4ed83d57", 00:21:00.547 "strip_size_kb": 64, 00:21:00.547 "state": "online", 00:21:00.547 "raid_level": "raid5f", 00:21:00.547 "superblock": true, 00:21:00.547 "num_base_bdevs": 4, 00:21:00.547 "num_base_bdevs_discovered": 3, 00:21:00.547 "num_base_bdevs_operational": 3, 00:21:00.547 "base_bdevs_list": [ 00:21:00.547 { 00:21:00.547 "name": null, 00:21:00.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.547 "is_configured": false, 00:21:00.547 "data_offset": 2048, 00:21:00.547 "data_size": 63488 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "pt2", 00:21:00.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 2048, 00:21:00.547 "data_size": 63488 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "pt3", 00:21:00.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 2048, 00:21:00.547 "data_size": 63488 
00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "pt4", 00:21:00.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 2048, 00:21:00.547 "data_size": 63488 00:21:00.547 } 00:21:00.547 ] 00:21:00.547 }' 00:21:00.547 14:20:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.547 14:20:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.114 [2024-11-27 14:20:31.449379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0efa7785-f708-41bf-a0d1-91af4ed83d57 '!=' 0efa7785-f708-41bf-a0d1-91af4ed83d57 ']' 00:21:01.114 14:20:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84763 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84763 ']' 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84763 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84763 00:21:01.114 killing process with pid 84763 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84763' 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84763 00:21:01.114 14:20:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84763 00:21:01.114 [2024-11-27 14:20:31.519750] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:01.114 [2024-11-27 14:20:31.519941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.114 [2024-11-27 14:20:31.520064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.114 [2024-11-27 14:20:31.520109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:01.682 [2024-11-27 14:20:31.908630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:02.618 ************************************ 00:21:02.618 END TEST raid5f_superblock_test 00:21:02.618 
************************************ 00:21:02.618 14:20:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:02.618 00:21:02.618 real 0m9.517s 00:21:02.618 user 0m15.472s 00:21:02.618 sys 0m1.403s 00:21:02.618 14:20:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.618 14:20:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.618 14:20:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:02.618 14:20:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:21:02.618 14:20:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:02.618 14:20:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.618 14:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:02.618 ************************************ 00:21:02.619 START TEST raid5f_rebuild_test 00:21:02.619 ************************************ 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:02.619 14:20:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85254 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85254 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85254 ']' 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.619 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.878 [2024-11-27 14:20:33.233065] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:21:02.878 [2024-11-27 14:20:33.233481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85254 ] 00:21:02.879 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:02.879 Zero copy mechanism will not be used. 00:21:03.137 [2024-11-27 14:20:33.415608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.137 [2024-11-27 14:20:33.544267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.396 [2024-11-27 14:20:33.746667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:03.396 [2024-11-27 14:20:33.746746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:03.653 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.654 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:21:03.654 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:03.654 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:03.654 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.654 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.912 BaseBdev1_malloc 00:21:03.912 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.912 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:03.912 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.912 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:21:03.912 [2024-11-27 14:20:34.187041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:03.912 [2024-11-27 14:20:34.187118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.912 [2024-11-27 14:20:34.187151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:03.912 [2024-11-27 14:20:34.187173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.912 [2024-11-27 14:20:34.189990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.912 [2024-11-27 14:20:34.190208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:03.912 BaseBdev1 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 BaseBdev2_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 [2024-11-27 14:20:34.243311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:03.913 [2024-11-27 14:20:34.243391] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.913 [2024-11-27 14:20:34.243424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:03.913 [2024-11-27 14:20:34.243443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.913 [2024-11-27 14:20:34.246232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.913 [2024-11-27 14:20:34.246283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:03.913 BaseBdev2 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 BaseBdev3_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 [2024-11-27 14:20:34.311248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:03.913 [2024-11-27 14:20:34.311320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.913 [2024-11-27 14:20:34.311354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:03.913 
[2024-11-27 14:20:34.311374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.913 [2024-11-27 14:20:34.314220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.913 [2024-11-27 14:20:34.314407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:03.913 BaseBdev3 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 BaseBdev4_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 [2024-11-27 14:20:34.363261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:03.913 [2024-11-27 14:20:34.363339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.913 [2024-11-27 14:20:34.363371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:03.913 [2024-11-27 14:20:34.363390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.913 [2024-11-27 14:20:34.366167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:03.913 [2024-11-27 14:20:34.366222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:03.913 BaseBdev4 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 spare_malloc 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.913 spare_delay 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.913 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.171 [2024-11-27 14:20:34.423295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:04.171 [2024-11-27 14:20:34.423364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.171 [2024-11-27 14:20:34.423392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:04.171 [2024-11-27 14:20:34.423410] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.171 [2024-11-27 14:20:34.426175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.171 [2024-11-27 14:20:34.426228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:04.171 spare 00:21:04.171 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.171 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:04.171 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.171 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.171 [2024-11-27 14:20:34.435349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.171 [2024-11-27 14:20:34.437724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.172 [2024-11-27 14:20:34.437975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:04.172 [2024-11-27 14:20:34.438076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:04.172 [2024-11-27 14:20:34.438221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:04.172 [2024-11-27 14:20:34.438243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:04.172 [2024-11-27 14:20:34.438580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:04.172 [2024-11-27 14:20:34.445415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:04.172 [2024-11-27 14:20:34.445550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:04.172 [2024-11-27 
14:20:34.446055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.172 "name": "raid_bdev1", 00:21:04.172 "uuid": 
"14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:04.172 "strip_size_kb": 64, 00:21:04.172 "state": "online", 00:21:04.172 "raid_level": "raid5f", 00:21:04.172 "superblock": false, 00:21:04.172 "num_base_bdevs": 4, 00:21:04.172 "num_base_bdevs_discovered": 4, 00:21:04.172 "num_base_bdevs_operational": 4, 00:21:04.172 "base_bdevs_list": [ 00:21:04.172 { 00:21:04.172 "name": "BaseBdev1", 00:21:04.172 "uuid": "5d1ba218-64e3-558c-b330-6d885cd7607e", 00:21:04.172 "is_configured": true, 00:21:04.172 "data_offset": 0, 00:21:04.172 "data_size": 65536 00:21:04.172 }, 00:21:04.172 { 00:21:04.172 "name": "BaseBdev2", 00:21:04.172 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:04.172 "is_configured": true, 00:21:04.172 "data_offset": 0, 00:21:04.172 "data_size": 65536 00:21:04.172 }, 00:21:04.172 { 00:21:04.172 "name": "BaseBdev3", 00:21:04.172 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:04.172 "is_configured": true, 00:21:04.172 "data_offset": 0, 00:21:04.172 "data_size": 65536 00:21:04.172 }, 00:21:04.172 { 00:21:04.172 "name": "BaseBdev4", 00:21:04.172 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:04.172 "is_configured": true, 00:21:04.172 "data_offset": 0, 00:21:04.172 "data_size": 65536 00:21:04.172 } 00:21:04.172 ] 00:21:04.172 }' 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.172 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.739 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:04.739 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:04.740 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.740 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.740 [2024-11-27 14:20:34.957911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:04.740 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:04.740 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:04.999 [2024-11-27 14:20:35.341780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:04.999 /dev/nbd0 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.999 1+0 records in 00:21:04.999 1+0 records out 00:21:04.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304584 s, 13.4 MB/s 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.999 14:20:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:04.999 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:21:05.566 512+0 records in 00:21:05.566 512+0 records out 00:21:05.566 100663296 bytes (101 MB, 96 MiB) copied, 0.640687 s, 157 MB/s 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.566 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:21:05.824 [2024-11-27 14:20:36.325480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.824 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:05.824 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.824 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.824 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.824 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.824 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.082 [2024-11-27 14:20:36.341058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.082 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.082 "name": "raid_bdev1", 00:21:06.082 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:06.082 "strip_size_kb": 64, 00:21:06.082 "state": "online", 00:21:06.082 "raid_level": "raid5f", 00:21:06.082 "superblock": false, 00:21:06.082 "num_base_bdevs": 4, 00:21:06.082 "num_base_bdevs_discovered": 3, 00:21:06.083 "num_base_bdevs_operational": 3, 00:21:06.083 "base_bdevs_list": [ 00:21:06.083 { 00:21:06.083 "name": null, 00:21:06.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.083 "is_configured": false, 00:21:06.083 "data_offset": 0, 00:21:06.083 "data_size": 65536 00:21:06.083 }, 00:21:06.083 { 00:21:06.083 "name": "BaseBdev2", 00:21:06.083 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:06.083 "is_configured": true, 00:21:06.083 
"data_offset": 0, 00:21:06.083 "data_size": 65536 00:21:06.083 }, 00:21:06.083 { 00:21:06.083 "name": "BaseBdev3", 00:21:06.083 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:06.083 "is_configured": true, 00:21:06.083 "data_offset": 0, 00:21:06.083 "data_size": 65536 00:21:06.083 }, 00:21:06.083 { 00:21:06.083 "name": "BaseBdev4", 00:21:06.083 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:06.083 "is_configured": true, 00:21:06.083 "data_offset": 0, 00:21:06.083 "data_size": 65536 00:21:06.083 } 00:21:06.083 ] 00:21:06.083 }' 00:21:06.083 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.083 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:06.341 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.341 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 [2024-11-27 14:20:36.849199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.600 [2024-11-27 14:20:36.863589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:06.600 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.600 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:06.600 [2024-11-27 14:20:36.873054] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.536 "name": "raid_bdev1", 00:21:07.536 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:07.536 "strip_size_kb": 64, 00:21:07.536 "state": "online", 00:21:07.536 "raid_level": "raid5f", 00:21:07.536 "superblock": false, 00:21:07.536 "num_base_bdevs": 4, 00:21:07.536 "num_base_bdevs_discovered": 4, 00:21:07.536 "num_base_bdevs_operational": 4, 00:21:07.536 "process": { 00:21:07.536 "type": "rebuild", 00:21:07.536 "target": "spare", 00:21:07.536 "progress": { 00:21:07.536 "blocks": 17280, 00:21:07.536 "percent": 8 00:21:07.536 } 00:21:07.536 }, 00:21:07.536 "base_bdevs_list": [ 00:21:07.536 { 00:21:07.536 "name": "spare", 00:21:07.536 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:07.536 "is_configured": true, 00:21:07.536 "data_offset": 0, 00:21:07.536 "data_size": 65536 00:21:07.536 }, 00:21:07.536 { 00:21:07.536 "name": "BaseBdev2", 00:21:07.536 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:07.536 "is_configured": true, 00:21:07.536 "data_offset": 0, 00:21:07.536 "data_size": 65536 00:21:07.536 }, 00:21:07.536 { 00:21:07.536 "name": "BaseBdev3", 00:21:07.536 "uuid": 
"82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:07.536 "is_configured": true, 00:21:07.536 "data_offset": 0, 00:21:07.536 "data_size": 65536 00:21:07.536 }, 00:21:07.536 { 00:21:07.536 "name": "BaseBdev4", 00:21:07.536 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:07.536 "is_configured": true, 00:21:07.536 "data_offset": 0, 00:21:07.536 "data_size": 65536 00:21:07.536 } 00:21:07.536 ] 00:21:07.536 }' 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.536 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.794 [2024-11-27 14:20:38.054318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.794 [2024-11-27 14:20:38.086336] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.794 [2024-11-27 14:20:38.086463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.794 [2024-11-27 14:20:38.086493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.794 [2024-11-27 14:20:38.086513] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.794 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.794 "name": "raid_bdev1", 00:21:07.794 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:07.794 "strip_size_kb": 64, 00:21:07.795 "state": "online", 00:21:07.795 "raid_level": "raid5f", 00:21:07.795 "superblock": false, 00:21:07.795 "num_base_bdevs": 4, 00:21:07.795 "num_base_bdevs_discovered": 3, 00:21:07.795 
"num_base_bdevs_operational": 3, 00:21:07.795 "base_bdevs_list": [ 00:21:07.795 { 00:21:07.795 "name": null, 00:21:07.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.795 "is_configured": false, 00:21:07.795 "data_offset": 0, 00:21:07.795 "data_size": 65536 00:21:07.795 }, 00:21:07.795 { 00:21:07.795 "name": "BaseBdev2", 00:21:07.795 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:07.795 "is_configured": true, 00:21:07.795 "data_offset": 0, 00:21:07.795 "data_size": 65536 00:21:07.795 }, 00:21:07.795 { 00:21:07.795 "name": "BaseBdev3", 00:21:07.795 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:07.795 "is_configured": true, 00:21:07.795 "data_offset": 0, 00:21:07.795 "data_size": 65536 00:21:07.795 }, 00:21:07.795 { 00:21:07.795 "name": "BaseBdev4", 00:21:07.795 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:07.795 "is_configured": true, 00:21:07.795 "data_offset": 0, 00:21:07.795 "data_size": 65536 00:21:07.795 } 00:21:07.795 ] 00:21:07.795 }' 00:21:07.795 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.795 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.362 14:20:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.362 "name": "raid_bdev1", 00:21:08.362 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:08.362 "strip_size_kb": 64, 00:21:08.362 "state": "online", 00:21:08.362 "raid_level": "raid5f", 00:21:08.362 "superblock": false, 00:21:08.362 "num_base_bdevs": 4, 00:21:08.362 "num_base_bdevs_discovered": 3, 00:21:08.362 "num_base_bdevs_operational": 3, 00:21:08.362 "base_bdevs_list": [ 00:21:08.362 { 00:21:08.362 "name": null, 00:21:08.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.362 "is_configured": false, 00:21:08.362 "data_offset": 0, 00:21:08.362 "data_size": 65536 00:21:08.362 }, 00:21:08.362 { 00:21:08.362 "name": "BaseBdev2", 00:21:08.362 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:08.362 "is_configured": true, 00:21:08.362 "data_offset": 0, 00:21:08.362 "data_size": 65536 00:21:08.362 }, 00:21:08.362 { 00:21:08.362 "name": "BaseBdev3", 00:21:08.362 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:08.362 "is_configured": true, 00:21:08.362 "data_offset": 0, 00:21:08.362 "data_size": 65536 00:21:08.362 }, 00:21:08.362 { 00:21:08.362 "name": "BaseBdev4", 00:21:08.362 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:08.362 "is_configured": true, 00:21:08.362 "data_offset": 0, 00:21:08.362 "data_size": 65536 00:21:08.362 } 00:21:08.362 ] 00:21:08.362 }' 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.362 [2024-11-27 14:20:38.778331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.362 [2024-11-27 14:20:38.791667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.362 14:20:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:08.362 [2024-11-27 14:20:38.800488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.295 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.295 
14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.554 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.554 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.554 "name": "raid_bdev1", 00:21:09.554 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:09.554 "strip_size_kb": 64, 00:21:09.554 "state": "online", 00:21:09.554 "raid_level": "raid5f", 00:21:09.554 "superblock": false, 00:21:09.554 "num_base_bdevs": 4, 00:21:09.554 "num_base_bdevs_discovered": 4, 00:21:09.554 "num_base_bdevs_operational": 4, 00:21:09.554 "process": { 00:21:09.554 "type": "rebuild", 00:21:09.554 "target": "spare", 00:21:09.554 "progress": { 00:21:09.554 "blocks": 17280, 00:21:09.554 "percent": 8 00:21:09.554 } 00:21:09.554 }, 00:21:09.554 "base_bdevs_list": [ 00:21:09.554 { 00:21:09.554 "name": "spare", 00:21:09.554 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:09.554 "is_configured": true, 00:21:09.554 "data_offset": 0, 00:21:09.554 "data_size": 65536 00:21:09.554 }, 00:21:09.554 { 00:21:09.554 "name": "BaseBdev2", 00:21:09.554 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 }, 00:21:09.555 { 00:21:09.555 "name": "BaseBdev3", 00:21:09.555 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 }, 00:21:09.555 { 00:21:09.555 "name": "BaseBdev4", 00:21:09.555 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 } 00:21:09.555 ] 00:21:09.555 }' 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=681 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.555 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.555 14:20:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:09.555 "name": "raid_bdev1", 00:21:09.555 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:09.555 "strip_size_kb": 64, 00:21:09.555 "state": "online", 00:21:09.555 "raid_level": "raid5f", 00:21:09.555 "superblock": false, 00:21:09.555 "num_base_bdevs": 4, 00:21:09.555 "num_base_bdevs_discovered": 4, 00:21:09.555 "num_base_bdevs_operational": 4, 00:21:09.555 "process": { 00:21:09.555 "type": "rebuild", 00:21:09.555 "target": "spare", 00:21:09.555 "progress": { 00:21:09.555 "blocks": 21120, 00:21:09.555 "percent": 10 00:21:09.555 } 00:21:09.555 }, 00:21:09.555 "base_bdevs_list": [ 00:21:09.555 { 00:21:09.555 "name": "spare", 00:21:09.555 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 }, 00:21:09.555 { 00:21:09.555 "name": "BaseBdev2", 00:21:09.555 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 }, 00:21:09.555 { 00:21:09.555 "name": "BaseBdev3", 00:21:09.555 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 }, 00:21:09.555 { 00:21:09.555 "name": "BaseBdev4", 00:21:09.555 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:09.555 "is_configured": true, 00:21:09.555 "data_offset": 0, 00:21:09.555 "data_size": 65536 00:21:09.555 } 00:21:09.555 ] 00:21:09.555 }' 00:21:09.555 14:20:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.555 14:20:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.555 14:20:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.813 14:20:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.813 14:20:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.748 "name": "raid_bdev1", 00:21:10.748 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:10.748 "strip_size_kb": 64, 00:21:10.748 "state": "online", 00:21:10.748 "raid_level": "raid5f", 00:21:10.748 "superblock": false, 00:21:10.748 "num_base_bdevs": 4, 00:21:10.748 "num_base_bdevs_discovered": 4, 00:21:10.748 "num_base_bdevs_operational": 4, 00:21:10.748 "process": { 00:21:10.748 "type": "rebuild", 00:21:10.748 "target": "spare", 00:21:10.748 "progress": { 00:21:10.748 "blocks": 44160, 00:21:10.748 "percent": 22 00:21:10.748 } 00:21:10.748 }, 00:21:10.748 "base_bdevs_list": [ 00:21:10.748 { 
00:21:10.748 "name": "spare", 00:21:10.748 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:10.748 "is_configured": true, 00:21:10.748 "data_offset": 0, 00:21:10.748 "data_size": 65536 00:21:10.748 }, 00:21:10.748 { 00:21:10.748 "name": "BaseBdev2", 00:21:10.748 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:10.748 "is_configured": true, 00:21:10.748 "data_offset": 0, 00:21:10.748 "data_size": 65536 00:21:10.748 }, 00:21:10.748 { 00:21:10.748 "name": "BaseBdev3", 00:21:10.748 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:10.748 "is_configured": true, 00:21:10.748 "data_offset": 0, 00:21:10.748 "data_size": 65536 00:21:10.748 }, 00:21:10.748 { 00:21:10.748 "name": "BaseBdev4", 00:21:10.748 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:10.748 "is_configured": true, 00:21:10.748 "data_offset": 0, 00:21:10.748 "data_size": 65536 00:21:10.748 } 00:21:10.748 ] 00:21:10.748 }' 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.748 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.006 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.006 14:20:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.944 "name": "raid_bdev1", 00:21:11.944 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:11.944 "strip_size_kb": 64, 00:21:11.944 "state": "online", 00:21:11.944 "raid_level": "raid5f", 00:21:11.944 "superblock": false, 00:21:11.944 "num_base_bdevs": 4, 00:21:11.944 "num_base_bdevs_discovered": 4, 00:21:11.944 "num_base_bdevs_operational": 4, 00:21:11.944 "process": { 00:21:11.944 "type": "rebuild", 00:21:11.944 "target": "spare", 00:21:11.944 "progress": { 00:21:11.944 "blocks": 65280, 00:21:11.944 "percent": 33 00:21:11.944 } 00:21:11.944 }, 00:21:11.944 "base_bdevs_list": [ 00:21:11.944 { 00:21:11.944 "name": "spare", 00:21:11.944 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:11.944 "is_configured": true, 00:21:11.944 "data_offset": 0, 00:21:11.944 "data_size": 65536 00:21:11.944 }, 00:21:11.944 { 00:21:11.944 "name": "BaseBdev2", 00:21:11.944 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:11.944 "is_configured": true, 00:21:11.944 "data_offset": 0, 00:21:11.944 "data_size": 65536 00:21:11.944 }, 00:21:11.944 { 00:21:11.944 "name": "BaseBdev3", 00:21:11.944 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:11.944 "is_configured": true, 00:21:11.944 "data_offset": 0, 00:21:11.944 
"data_size": 65536 00:21:11.944 }, 00:21:11.944 { 00:21:11.944 "name": "BaseBdev4", 00:21:11.944 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:11.944 "is_configured": true, 00:21:11.944 "data_offset": 0, 00:21:11.944 "data_size": 65536 00:21:11.944 } 00:21:11.944 ] 00:21:11.944 }' 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.944 14:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.314 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.314 "name": "raid_bdev1", 00:21:13.314 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:13.314 "strip_size_kb": 64, 00:21:13.314 "state": "online", 00:21:13.314 "raid_level": "raid5f", 00:21:13.314 "superblock": false, 00:21:13.314 "num_base_bdevs": 4, 00:21:13.314 "num_base_bdevs_discovered": 4, 00:21:13.314 "num_base_bdevs_operational": 4, 00:21:13.314 "process": { 00:21:13.315 "type": "rebuild", 00:21:13.315 "target": "spare", 00:21:13.315 "progress": { 00:21:13.315 "blocks": 86400, 00:21:13.315 "percent": 43 00:21:13.315 } 00:21:13.315 }, 00:21:13.315 "base_bdevs_list": [ 00:21:13.315 { 00:21:13.315 "name": "spare", 00:21:13.315 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:13.315 "is_configured": true, 00:21:13.315 "data_offset": 0, 00:21:13.315 "data_size": 65536 00:21:13.315 }, 00:21:13.315 { 00:21:13.315 "name": "BaseBdev2", 00:21:13.315 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:13.315 "is_configured": true, 00:21:13.315 "data_offset": 0, 00:21:13.315 "data_size": 65536 00:21:13.315 }, 00:21:13.315 { 00:21:13.315 "name": "BaseBdev3", 00:21:13.315 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:13.315 "is_configured": true, 00:21:13.315 "data_offset": 0, 00:21:13.315 "data_size": 65536 00:21:13.315 }, 00:21:13.315 { 00:21:13.315 "name": "BaseBdev4", 00:21:13.315 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:13.315 "is_configured": true, 00:21:13.315 "data_offset": 0, 00:21:13.315 "data_size": 65536 00:21:13.315 } 00:21:13.315 ] 00:21:13.315 }' 00:21:13.315 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.315 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.315 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:21:13.315 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.315 14:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.248 "name": "raid_bdev1", 00:21:14.248 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:14.248 "strip_size_kb": 64, 00:21:14.248 "state": "online", 00:21:14.248 "raid_level": "raid5f", 00:21:14.248 "superblock": false, 00:21:14.248 "num_base_bdevs": 4, 00:21:14.248 "num_base_bdevs_discovered": 4, 00:21:14.248 "num_base_bdevs_operational": 4, 00:21:14.248 "process": { 00:21:14.248 "type": "rebuild", 00:21:14.248 "target": "spare", 00:21:14.248 
"progress": { 00:21:14.248 "blocks": 109440, 00:21:14.248 "percent": 55 00:21:14.248 } 00:21:14.248 }, 00:21:14.248 "base_bdevs_list": [ 00:21:14.248 { 00:21:14.248 "name": "spare", 00:21:14.248 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:14.248 "is_configured": true, 00:21:14.248 "data_offset": 0, 00:21:14.248 "data_size": 65536 00:21:14.248 }, 00:21:14.248 { 00:21:14.248 "name": "BaseBdev2", 00:21:14.248 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:14.248 "is_configured": true, 00:21:14.248 "data_offset": 0, 00:21:14.248 "data_size": 65536 00:21:14.248 }, 00:21:14.248 { 00:21:14.248 "name": "BaseBdev3", 00:21:14.248 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:14.248 "is_configured": true, 00:21:14.248 "data_offset": 0, 00:21:14.248 "data_size": 65536 00:21:14.248 }, 00:21:14.248 { 00:21:14.248 "name": "BaseBdev4", 00:21:14.248 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:14.248 "is_configured": true, 00:21:14.248 "data_offset": 0, 00:21:14.248 "data_size": 65536 00:21:14.248 } 00:21:14.248 ] 00:21:14.248 }' 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.248 14:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.621 14:20:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.621 "name": "raid_bdev1", 00:21:15.621 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:15.621 "strip_size_kb": 64, 00:21:15.621 "state": "online", 00:21:15.621 "raid_level": "raid5f", 00:21:15.621 "superblock": false, 00:21:15.621 "num_base_bdevs": 4, 00:21:15.621 "num_base_bdevs_discovered": 4, 00:21:15.621 "num_base_bdevs_operational": 4, 00:21:15.621 "process": { 00:21:15.621 "type": "rebuild", 00:21:15.621 "target": "spare", 00:21:15.621 "progress": { 00:21:15.621 "blocks": 130560, 00:21:15.621 "percent": 66 00:21:15.621 } 00:21:15.621 }, 00:21:15.621 "base_bdevs_list": [ 00:21:15.621 { 00:21:15.621 "name": "spare", 00:21:15.621 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:15.621 "is_configured": true, 00:21:15.621 "data_offset": 0, 00:21:15.621 "data_size": 65536 00:21:15.621 }, 00:21:15.621 { 00:21:15.621 "name": "BaseBdev2", 00:21:15.621 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:15.621 "is_configured": true, 00:21:15.621 "data_offset": 0, 00:21:15.621 "data_size": 65536 00:21:15.621 }, 00:21:15.621 { 
00:21:15.621 "name": "BaseBdev3", 00:21:15.621 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:15.621 "is_configured": true, 00:21:15.621 "data_offset": 0, 00:21:15.621 "data_size": 65536 00:21:15.621 }, 00:21:15.621 { 00:21:15.621 "name": "BaseBdev4", 00:21:15.621 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:15.621 "is_configured": true, 00:21:15.621 "data_offset": 0, 00:21:15.621 "data_size": 65536 00:21:15.621 } 00:21:15.621 ] 00:21:15.621 }' 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.621 14:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.558 "name": "raid_bdev1", 00:21:16.558 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:16.558 "strip_size_kb": 64, 00:21:16.558 "state": "online", 00:21:16.558 "raid_level": "raid5f", 00:21:16.558 "superblock": false, 00:21:16.558 "num_base_bdevs": 4, 00:21:16.558 "num_base_bdevs_discovered": 4, 00:21:16.558 "num_base_bdevs_operational": 4, 00:21:16.558 "process": { 00:21:16.558 "type": "rebuild", 00:21:16.558 "target": "spare", 00:21:16.558 "progress": { 00:21:16.558 "blocks": 153600, 00:21:16.558 "percent": 78 00:21:16.558 } 00:21:16.558 }, 00:21:16.558 "base_bdevs_list": [ 00:21:16.558 { 00:21:16.558 "name": "spare", 00:21:16.558 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:16.558 "is_configured": true, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 }, 00:21:16.558 { 00:21:16.558 "name": "BaseBdev2", 00:21:16.558 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:16.558 "is_configured": true, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 }, 00:21:16.558 { 00:21:16.558 "name": "BaseBdev3", 00:21:16.558 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:16.558 "is_configured": true, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 }, 00:21:16.558 { 00:21:16.558 "name": "BaseBdev4", 00:21:16.558 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:16.558 "is_configured": true, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 } 00:21:16.558 ] 00:21:16.558 }' 00:21:16.558 14:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.558 14:20:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.558 14:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.838 14:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.838 14:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.779 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.779 "name": "raid_bdev1", 00:21:17.779 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:17.779 "strip_size_kb": 64, 00:21:17.779 "state": "online", 00:21:17.779 "raid_level": "raid5f", 00:21:17.779 "superblock": false, 00:21:17.779 "num_base_bdevs": 4, 00:21:17.779 
"num_base_bdevs_discovered": 4, 00:21:17.779 "num_base_bdevs_operational": 4, 00:21:17.779 "process": { 00:21:17.779 "type": "rebuild", 00:21:17.780 "target": "spare", 00:21:17.780 "progress": { 00:21:17.780 "blocks": 174720, 00:21:17.780 "percent": 88 00:21:17.780 } 00:21:17.780 }, 00:21:17.780 "base_bdevs_list": [ 00:21:17.780 { 00:21:17.780 "name": "spare", 00:21:17.780 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:17.780 "is_configured": true, 00:21:17.780 "data_offset": 0, 00:21:17.780 "data_size": 65536 00:21:17.780 }, 00:21:17.780 { 00:21:17.780 "name": "BaseBdev2", 00:21:17.780 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:17.780 "is_configured": true, 00:21:17.780 "data_offset": 0, 00:21:17.780 "data_size": 65536 00:21:17.780 }, 00:21:17.780 { 00:21:17.780 "name": "BaseBdev3", 00:21:17.780 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:17.780 "is_configured": true, 00:21:17.780 "data_offset": 0, 00:21:17.780 "data_size": 65536 00:21:17.780 }, 00:21:17.780 { 00:21:17.780 "name": "BaseBdev4", 00:21:17.780 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:17.780 "is_configured": true, 00:21:17.780 "data_offset": 0, 00:21:17.780 "data_size": 65536 00:21:17.780 } 00:21:17.780 ] 00:21:17.780 }' 00:21:17.780 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.780 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.780 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.780 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.780 14:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.714 [2024-11-27 14:20:49.212875] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:18.714 [2024-11-27 14:20:49.213005] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:18.714 [2024-11-27 14:20:49.213074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.970 "name": "raid_bdev1", 00:21:18.970 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:18.970 "strip_size_kb": 64, 00:21:18.970 "state": "online", 00:21:18.970 "raid_level": "raid5f", 00:21:18.970 "superblock": false, 00:21:18.970 "num_base_bdevs": 4, 00:21:18.970 "num_base_bdevs_discovered": 4, 00:21:18.970 "num_base_bdevs_operational": 4, 00:21:18.970 "base_bdevs_list": [ 00:21:18.970 { 00:21:18.970 "name": "spare", 00:21:18.970 "uuid": 
"b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 }, 00:21:18.970 { 00:21:18.970 "name": "BaseBdev2", 00:21:18.970 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 }, 00:21:18.970 { 00:21:18.970 "name": "BaseBdev3", 00:21:18.970 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 }, 00:21:18.970 { 00:21:18.970 "name": "BaseBdev4", 00:21:18.970 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 } 00:21:18.970 ] 00:21:18.970 }' 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.970 "name": "raid_bdev1", 00:21:18.970 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:18.970 "strip_size_kb": 64, 00:21:18.970 "state": "online", 00:21:18.970 "raid_level": "raid5f", 00:21:18.970 "superblock": false, 00:21:18.970 "num_base_bdevs": 4, 00:21:18.970 "num_base_bdevs_discovered": 4, 00:21:18.970 "num_base_bdevs_operational": 4, 00:21:18.970 "base_bdevs_list": [ 00:21:18.970 { 00:21:18.970 "name": "spare", 00:21:18.970 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 }, 00:21:18.970 { 00:21:18.970 "name": "BaseBdev2", 00:21:18.970 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 }, 00:21:18.970 { 00:21:18.970 "name": "BaseBdev3", 00:21:18.970 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 }, 00:21:18.970 { 00:21:18.970 "name": "BaseBdev4", 00:21:18.970 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:18.970 "is_configured": true, 00:21:18.970 "data_offset": 0, 00:21:18.970 "data_size": 65536 00:21:18.970 } 00:21:18.970 ] 00:21:18.970 }' 00:21:18.970 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:19.228 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.228 "name": "raid_bdev1", 00:21:19.228 "uuid": "14a10ac2-94f5-4cf0-adc7-3e65d6889715", 00:21:19.228 "strip_size_kb": 64, 00:21:19.228 "state": "online", 00:21:19.228 "raid_level": "raid5f", 00:21:19.228 "superblock": false, 00:21:19.228 "num_base_bdevs": 4, 00:21:19.228 "num_base_bdevs_discovered": 4, 00:21:19.228 "num_base_bdevs_operational": 4, 00:21:19.228 "base_bdevs_list": [ 00:21:19.228 { 00:21:19.228 "name": "spare", 00:21:19.228 "uuid": "b3b2ddea-ee3c-5143-bc29-258ce985c22e", 00:21:19.229 "is_configured": true, 00:21:19.229 "data_offset": 0, 00:21:19.229 "data_size": 65536 00:21:19.229 }, 00:21:19.229 { 00:21:19.229 "name": "BaseBdev2", 00:21:19.229 "uuid": "8f176469-4f9e-5d55-ad3f-d29c5487e6d3", 00:21:19.229 "is_configured": true, 00:21:19.229 "data_offset": 0, 00:21:19.229 "data_size": 65536 00:21:19.229 }, 00:21:19.229 { 00:21:19.229 "name": "BaseBdev3", 00:21:19.229 "uuid": "82608d70-66b3-53d0-8868-cbf524a55d79", 00:21:19.229 "is_configured": true, 00:21:19.229 "data_offset": 0, 00:21:19.229 "data_size": 65536 00:21:19.229 }, 00:21:19.229 { 00:21:19.229 "name": "BaseBdev4", 00:21:19.229 "uuid": "8315963e-c23b-5241-a83b-4f0afd2e6ba9", 00:21:19.229 "is_configured": true, 00:21:19.229 "data_offset": 0, 00:21:19.229 "data_size": 65536 00:21:19.229 } 00:21:19.229 ] 00:21:19.229 }' 00:21:19.229 14:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.229 14:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.796 [2024-11-27 14:20:50.112739] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.796 [2024-11-27 14:20:50.112809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.796 [2024-11-27 14:20:50.112928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.796 [2024-11-27 14:20:50.113052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.796 [2024-11-27 14:20:50.113081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:19.796 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:20.053 /dev/nbd0 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.053 1+0 records in 
00:21:20.053 1+0 records out 00:21:20.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329001 s, 12.4 MB/s 00:21:20.053 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:20.054 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:20.618 /dev/nbd1 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.618 1+0 records in 00:21:20.618 1+0 records out 00:21:20.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404486 s, 10.1 MB/s 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:20.618 14:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:20.618 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:20.619 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:20.619 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:20.619 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:20.619 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:20.619 14:20:51 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.619 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.876 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85254 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85254 ']' 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85254 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85254 00:21:21.441 killing process with pid 85254 00:21:21.441 Received shutdown signal, test time was about 60.000000 seconds 00:21:21.441 00:21:21.441 Latency(us) 00:21:21.441 [2024-11-27T14:20:51.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.441 [2024-11-27T14:20:51.954Z] =================================================================================================================== 00:21:21.441 [2024-11-27T14:20:51.954Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85254' 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85254 00:21:21.441 14:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85254 00:21:21.441 [2024-11-27 14:20:51.711110] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:21.698 [2024-11-27 14:20:52.163574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:21:23.111 00:21:23.111 real 0m20.107s 00:21:23.111 user 0m25.027s 00:21:23.111 sys 0m2.248s 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.111 ************************************ 00:21:23.111 END TEST raid5f_rebuild_test 00:21:23.111 ************************************ 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.111 14:20:53 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:21:23.111 14:20:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:23.111 14:20:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.111 14:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:23.111 ************************************ 00:21:23.111 START TEST raid5f_rebuild_test_sb 00:21:23.111 ************************************ 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 
)) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85759 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85759 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85759 ']' 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.111 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.111 [2024-11-27 14:20:53.381863] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:21:23.111 [2024-11-27 14:20:53.382015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85759 ] 00:21:23.111 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:23.111 Zero copy mechanism will not be used. 00:21:23.111 [2024-11-27 14:20:53.556968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.370 [2024-11-27 14:20:53.690302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.628 [2024-11-27 14:20:53.895317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.628 [2024-11-27 14:20:53.895379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 BaseBdev1_malloc 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 [2024-11-27 14:20:54.493287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:24.195 [2024-11-27 14:20:54.493375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.195 [2024-11-27 14:20:54.493408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:24.195 [2024-11-27 14:20:54.493427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.195 [2024-11-27 14:20:54.496360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.195 [2024-11-27 14:20:54.496420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.195 BaseBdev1 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 BaseBdev2_malloc 00:21:24.195 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:24.196 
14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.196 [2024-11-27 14:20:54.546069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:24.196 [2024-11-27 14:20:54.546164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.196 [2024-11-27 14:20:54.546200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:24.196 [2024-11-27 14:20:54.546218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.196 [2024-11-27 14:20:54.549020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.196 [2024-11-27 14:20:54.549071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:24.196 BaseBdev2 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.196 BaseBdev3_malloc 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.196 [2024-11-27 14:20:54.614723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:24.196 [2024-11-27 14:20:54.614813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.196 [2024-11-27 14:20:54.614876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:24.196 [2024-11-27 14:20:54.614901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.196 [2024-11-27 14:20:54.617628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.196 [2024-11-27 14:20:54.617681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:24.196 BaseBdev3 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.196 BaseBdev4_malloc 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.196 [2024-11-27 14:20:54.667582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:24.196 
[2024-11-27 14:20:54.667663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.196 [2024-11-27 14:20:54.667694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:24.196 [2024-11-27 14:20:54.667712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.196 [2024-11-27 14:20:54.670513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.196 [2024-11-27 14:20:54.670573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:24.196 BaseBdev4 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.196 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 spare_malloc 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 spare_delay 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.455 14:20:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 [2024-11-27 14:20:54.727821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:24.455 [2024-11-27 14:20:54.727909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.455 [2024-11-27 14:20:54.727939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:24.455 [2024-11-27 14:20:54.727957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.455 [2024-11-27 14:20:54.730788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.455 [2024-11-27 14:20:54.730879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:24.455 spare 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 [2024-11-27 14:20:54.735897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:24.455 [2024-11-27 14:20:54.738333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:24.455 [2024-11-27 14:20:54.738428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:24.455 [2024-11-27 14:20:54.738512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:24.455 [2024-11-27 14:20:54.738773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:24.455 [2024-11-27 
14:20:54.738806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:24.455 [2024-11-27 14:20:54.739150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:24.455 [2024-11-27 14:20:54.746473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:24.455 [2024-11-27 14:20:54.746531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:24.455 [2024-11-27 14:20:54.746806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.455 14:20:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.455 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.456 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.456 "name": "raid_bdev1", 00:21:24.456 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:24.456 "strip_size_kb": 64, 00:21:24.456 "state": "online", 00:21:24.456 "raid_level": "raid5f", 00:21:24.456 "superblock": true, 00:21:24.456 "num_base_bdevs": 4, 00:21:24.456 "num_base_bdevs_discovered": 4, 00:21:24.456 "num_base_bdevs_operational": 4, 00:21:24.456 "base_bdevs_list": [ 00:21:24.456 { 00:21:24.456 "name": "BaseBdev1", 00:21:24.456 "uuid": "d05c937c-cb37-5481-9c25-79c2d22663e7", 00:21:24.456 "is_configured": true, 00:21:24.456 "data_offset": 2048, 00:21:24.456 "data_size": 63488 00:21:24.456 }, 00:21:24.456 { 00:21:24.456 "name": "BaseBdev2", 00:21:24.456 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:24.456 "is_configured": true, 00:21:24.456 "data_offset": 2048, 00:21:24.456 "data_size": 63488 00:21:24.456 }, 00:21:24.456 { 00:21:24.456 "name": "BaseBdev3", 00:21:24.456 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:24.456 "is_configured": true, 00:21:24.456 "data_offset": 2048, 00:21:24.456 "data_size": 63488 00:21:24.456 }, 00:21:24.456 { 00:21:24.456 "name": "BaseBdev4", 00:21:24.456 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:24.456 "is_configured": true, 00:21:24.456 "data_offset": 2048, 00:21:24.456 "data_size": 63488 00:21:24.456 } 00:21:24.456 ] 00:21:24.456 }' 00:21:24.456 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.456 14:20:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.022 [2024-11-27 14:20:55.258665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.022 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:25.023 14:20:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.023 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:25.281 [2024-11-27 14:20:55.718757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:25.281 /dev/nbd0 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.281 1+0 records in 00:21:25.281 1+0 records out 00:21:25.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034952 s, 11.7 MB/s 00:21:25.281 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:25.539 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:21:26.106 496+0 records in 00:21:26.106 496+0 records out 00:21:26.106 97517568 bytes (98 MB, 93 MiB) copied, 0.803853 s, 121 MB/s 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.106 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:26.671 [2024-11-27 14:20:56.961071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.671 [2024-11-27 14:20:56.995362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.671 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.671 "name": "raid_bdev1", 00:21:26.671 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:26.671 "strip_size_kb": 64, 00:21:26.671 "state": "online", 00:21:26.671 "raid_level": "raid5f", 00:21:26.671 "superblock": true, 00:21:26.671 "num_base_bdevs": 4, 00:21:26.671 "num_base_bdevs_discovered": 3, 00:21:26.671 "num_base_bdevs_operational": 3, 00:21:26.671 "base_bdevs_list": [ 00:21:26.671 { 00:21:26.671 "name": null, 00:21:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.671 "is_configured": false, 00:21:26.671 "data_offset": 0, 00:21:26.671 "data_size": 63488 00:21:26.671 }, 00:21:26.671 { 00:21:26.671 "name": "BaseBdev2", 00:21:26.671 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:26.671 "is_configured": true, 00:21:26.671 "data_offset": 2048, 00:21:26.671 "data_size": 63488 00:21:26.671 }, 00:21:26.671 { 00:21:26.671 "name": "BaseBdev3", 00:21:26.671 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:26.671 "is_configured": true, 00:21:26.671 "data_offset": 2048, 00:21:26.671 "data_size": 63488 00:21:26.671 }, 00:21:26.671 { 00:21:26.671 "name": "BaseBdev4", 00:21:26.671 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:26.671 "is_configured": true, 00:21:26.671 "data_offset": 2048, 00:21:26.671 "data_size": 63488 00:21:26.671 } 00:21:26.671 ] 00:21:26.671 }' 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.671 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.237 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:27.237 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.237 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.237 [2024-11-27 14:20:57.519507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:21:27.237 [2024-11-27 14:20:57.534197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:21:27.237 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.237 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:27.237 [2024-11-27 14:20:57.543593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.171 "name": "raid_bdev1", 00:21:28.171 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:28.171 "strip_size_kb": 64, 00:21:28.171 "state": "online", 00:21:28.171 "raid_level": "raid5f", 00:21:28.171 "superblock": true, 00:21:28.171 "num_base_bdevs": 4, 
00:21:28.171 "num_base_bdevs_discovered": 4, 00:21:28.171 "num_base_bdevs_operational": 4, 00:21:28.171 "process": { 00:21:28.171 "type": "rebuild", 00:21:28.171 "target": "spare", 00:21:28.171 "progress": { 00:21:28.171 "blocks": 17280, 00:21:28.171 "percent": 9 00:21:28.171 } 00:21:28.171 }, 00:21:28.171 "base_bdevs_list": [ 00:21:28.171 { 00:21:28.171 "name": "spare", 00:21:28.171 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:28.171 "is_configured": true, 00:21:28.171 "data_offset": 2048, 00:21:28.171 "data_size": 63488 00:21:28.171 }, 00:21:28.171 { 00:21:28.171 "name": "BaseBdev2", 00:21:28.171 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:28.171 "is_configured": true, 00:21:28.171 "data_offset": 2048, 00:21:28.171 "data_size": 63488 00:21:28.171 }, 00:21:28.171 { 00:21:28.171 "name": "BaseBdev3", 00:21:28.171 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:28.171 "is_configured": true, 00:21:28.171 "data_offset": 2048, 00:21:28.171 "data_size": 63488 00:21:28.171 }, 00:21:28.171 { 00:21:28.171 "name": "BaseBdev4", 00:21:28.171 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:28.171 "is_configured": true, 00:21:28.171 "data_offset": 2048, 00:21:28.171 "data_size": 63488 00:21:28.171 } 00:21:28.171 ] 00:21:28.171 }' 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.171 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.429 14:20:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.429 [2024-11-27 14:20:58.705536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.429 [2024-11-27 14:20:58.757745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:28.429 [2024-11-27 14:20:58.757898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.429 [2024-11-27 14:20:58.757933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.429 [2024-11-27 14:20:58.757953] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.429 "name": "raid_bdev1", 00:21:28.429 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:28.429 "strip_size_kb": 64, 00:21:28.429 "state": "online", 00:21:28.429 "raid_level": "raid5f", 00:21:28.429 "superblock": true, 00:21:28.429 "num_base_bdevs": 4, 00:21:28.429 "num_base_bdevs_discovered": 3, 00:21:28.429 "num_base_bdevs_operational": 3, 00:21:28.429 "base_bdevs_list": [ 00:21:28.429 { 00:21:28.429 "name": null, 00:21:28.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.429 "is_configured": false, 00:21:28.429 "data_offset": 0, 00:21:28.429 "data_size": 63488 00:21:28.429 }, 00:21:28.429 { 00:21:28.429 "name": "BaseBdev2", 00:21:28.429 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:28.429 "is_configured": true, 00:21:28.429 "data_offset": 2048, 00:21:28.429 "data_size": 63488 00:21:28.429 }, 00:21:28.429 { 00:21:28.429 "name": "BaseBdev3", 00:21:28.429 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:28.429 "is_configured": true, 00:21:28.429 "data_offset": 2048, 00:21:28.429 "data_size": 63488 00:21:28.429 }, 00:21:28.429 { 00:21:28.429 "name": "BaseBdev4", 00:21:28.429 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:28.429 "is_configured": true, 00:21:28.429 "data_offset": 2048, 00:21:28.429 "data_size": 63488 00:21:28.429 } 00:21:28.429 ] 00:21:28.429 }' 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.429 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.022 "name": "raid_bdev1", 00:21:29.022 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:29.022 "strip_size_kb": 64, 00:21:29.022 "state": "online", 00:21:29.022 "raid_level": "raid5f", 00:21:29.022 "superblock": true, 00:21:29.022 "num_base_bdevs": 4, 00:21:29.022 "num_base_bdevs_discovered": 3, 00:21:29.022 "num_base_bdevs_operational": 3, 00:21:29.022 "base_bdevs_list": [ 00:21:29.022 { 00:21:29.022 "name": null, 00:21:29.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.022 "is_configured": false, 00:21:29.022 "data_offset": 0, 00:21:29.022 "data_size": 63488 00:21:29.022 }, 00:21:29.022 { 
00:21:29.022 "name": "BaseBdev2", 00:21:29.022 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:29.022 "is_configured": true, 00:21:29.022 "data_offset": 2048, 00:21:29.022 "data_size": 63488 00:21:29.022 }, 00:21:29.022 { 00:21:29.022 "name": "BaseBdev3", 00:21:29.022 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:29.022 "is_configured": true, 00:21:29.022 "data_offset": 2048, 00:21:29.022 "data_size": 63488 00:21:29.022 }, 00:21:29.022 { 00:21:29.022 "name": "BaseBdev4", 00:21:29.022 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:29.022 "is_configured": true, 00:21:29.022 "data_offset": 2048, 00:21:29.022 "data_size": 63488 00:21:29.022 } 00:21:29.022 ] 00:21:29.022 }' 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.022 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 [2024-11-27 14:20:59.524369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.279 [2024-11-27 14:20:59.539264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:21:29.279 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.279 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:29.279 [2024-11-27 14:20:59.549269] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.215 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.215 "name": "raid_bdev1", 00:21:30.215 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:30.215 "strip_size_kb": 64, 00:21:30.215 "state": "online", 00:21:30.215 "raid_level": "raid5f", 00:21:30.215 "superblock": true, 00:21:30.215 "num_base_bdevs": 4, 00:21:30.215 "num_base_bdevs_discovered": 4, 00:21:30.215 "num_base_bdevs_operational": 4, 00:21:30.215 "process": { 00:21:30.215 "type": "rebuild", 00:21:30.215 "target": "spare", 00:21:30.215 "progress": { 00:21:30.215 "blocks": 17280, 00:21:30.215 "percent": 9 00:21:30.215 } 00:21:30.215 }, 00:21:30.215 "base_bdevs_list": [ 00:21:30.215 { 00:21:30.215 "name": "spare", 00:21:30.215 "uuid": 
"3df43183-d66f-583f-98f7-d81cc135673e", 00:21:30.215 "is_configured": true, 00:21:30.215 "data_offset": 2048, 00:21:30.215 "data_size": 63488 00:21:30.215 }, 00:21:30.215 { 00:21:30.215 "name": "BaseBdev2", 00:21:30.215 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:30.215 "is_configured": true, 00:21:30.215 "data_offset": 2048, 00:21:30.215 "data_size": 63488 00:21:30.215 }, 00:21:30.215 { 00:21:30.215 "name": "BaseBdev3", 00:21:30.215 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:30.215 "is_configured": true, 00:21:30.215 "data_offset": 2048, 00:21:30.215 "data_size": 63488 00:21:30.215 }, 00:21:30.215 { 00:21:30.215 "name": "BaseBdev4", 00:21:30.215 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:30.215 "is_configured": true, 00:21:30.215 "data_offset": 2048, 00:21:30.215 "data_size": 63488 00:21:30.215 } 00:21:30.216 ] 00:21:30.216 }' 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:30.216 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=702 00:21:30.216 
14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.216 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.474 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.474 "name": "raid_bdev1", 00:21:30.474 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:30.474 "strip_size_kb": 64, 00:21:30.474 "state": "online", 00:21:30.474 "raid_level": "raid5f", 00:21:30.474 "superblock": true, 00:21:30.474 "num_base_bdevs": 4, 00:21:30.474 "num_base_bdevs_discovered": 4, 00:21:30.474 "num_base_bdevs_operational": 4, 00:21:30.474 "process": { 00:21:30.474 "type": "rebuild", 00:21:30.474 "target": "spare", 00:21:30.474 "progress": { 00:21:30.474 "blocks": 21120, 00:21:30.474 "percent": 11 00:21:30.474 } 00:21:30.474 }, 00:21:30.474 "base_bdevs_list": [ 00:21:30.474 { 00:21:30.474 "name": "spare", 00:21:30.474 "uuid": 
"3df43183-d66f-583f-98f7-d81cc135673e", 00:21:30.474 "is_configured": true, 00:21:30.474 "data_offset": 2048, 00:21:30.474 "data_size": 63488 00:21:30.474 }, 00:21:30.474 { 00:21:30.474 "name": "BaseBdev2", 00:21:30.474 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:30.474 "is_configured": true, 00:21:30.474 "data_offset": 2048, 00:21:30.474 "data_size": 63488 00:21:30.474 }, 00:21:30.474 { 00:21:30.474 "name": "BaseBdev3", 00:21:30.474 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:30.474 "is_configured": true, 00:21:30.474 "data_offset": 2048, 00:21:30.474 "data_size": 63488 00:21:30.474 }, 00:21:30.474 { 00:21:30.474 "name": "BaseBdev4", 00:21:30.474 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:30.474 "is_configured": true, 00:21:30.474 "data_offset": 2048, 00:21:30.474 "data_size": 63488 00:21:30.474 } 00:21:30.474 ] 00:21:30.474 }' 00:21:30.474 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.474 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.474 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.474 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.474 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.409 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.409 "name": "raid_bdev1", 00:21:31.409 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:31.409 "strip_size_kb": 64, 00:21:31.409 "state": "online", 00:21:31.409 "raid_level": "raid5f", 00:21:31.409 "superblock": true, 00:21:31.409 "num_base_bdevs": 4, 00:21:31.409 "num_base_bdevs_discovered": 4, 00:21:31.409 "num_base_bdevs_operational": 4, 00:21:31.409 "process": { 00:21:31.409 "type": "rebuild", 00:21:31.409 "target": "spare", 00:21:31.409 "progress": { 00:21:31.409 "blocks": 42240, 00:21:31.409 "percent": 22 00:21:31.409 } 00:21:31.409 }, 00:21:31.409 "base_bdevs_list": [ 00:21:31.409 { 00:21:31.409 "name": "spare", 00:21:31.409 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:31.409 "is_configured": true, 00:21:31.409 "data_offset": 2048, 00:21:31.409 "data_size": 63488 00:21:31.409 }, 00:21:31.409 { 00:21:31.409 "name": "BaseBdev2", 00:21:31.409 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:31.409 "is_configured": true, 00:21:31.409 "data_offset": 2048, 00:21:31.409 "data_size": 63488 00:21:31.409 }, 00:21:31.409 { 00:21:31.409 "name": "BaseBdev3", 00:21:31.409 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:31.409 "is_configured": true, 00:21:31.409 
"data_offset": 2048, 00:21:31.409 "data_size": 63488 00:21:31.409 }, 00:21:31.409 { 00:21:31.410 "name": "BaseBdev4", 00:21:31.410 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:31.410 "is_configured": true, 00:21:31.410 "data_offset": 2048, 00:21:31.410 "data_size": 63488 00:21:31.410 } 00:21:31.410 ] 00:21:31.410 }' 00:21:31.410 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.668 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.668 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.668 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.668 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.603 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.603 "name": "raid_bdev1", 00:21:32.603 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:32.603 "strip_size_kb": 64, 00:21:32.603 "state": "online", 00:21:32.603 "raid_level": "raid5f", 00:21:32.603 "superblock": true, 00:21:32.603 "num_base_bdevs": 4, 00:21:32.603 "num_base_bdevs_discovered": 4, 00:21:32.603 "num_base_bdevs_operational": 4, 00:21:32.603 "process": { 00:21:32.603 "type": "rebuild", 00:21:32.603 "target": "spare", 00:21:32.603 "progress": { 00:21:32.603 "blocks": 65280, 00:21:32.603 "percent": 34 00:21:32.603 } 00:21:32.603 }, 00:21:32.603 "base_bdevs_list": [ 00:21:32.603 { 00:21:32.603 "name": "spare", 00:21:32.603 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:32.603 "is_configured": true, 00:21:32.603 "data_offset": 2048, 00:21:32.603 "data_size": 63488 00:21:32.603 }, 00:21:32.603 { 00:21:32.603 "name": "BaseBdev2", 00:21:32.603 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:32.603 "is_configured": true, 00:21:32.603 "data_offset": 2048, 00:21:32.603 "data_size": 63488 00:21:32.603 }, 00:21:32.603 { 00:21:32.603 "name": "BaseBdev3", 00:21:32.603 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:32.604 "is_configured": true, 00:21:32.604 "data_offset": 2048, 00:21:32.604 "data_size": 63488 00:21:32.604 }, 00:21:32.604 { 00:21:32.604 "name": "BaseBdev4", 00:21:32.604 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:32.604 "is_configured": true, 00:21:32.604 "data_offset": 2048, 00:21:32.604 "data_size": 63488 00:21:32.604 } 00:21:32.604 ] 00:21:32.604 }' 00:21:32.604 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.862 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:21:32.862 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.862 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.862 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.799 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.799 "name": "raid_bdev1", 00:21:33.799 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:33.799 "strip_size_kb": 64, 00:21:33.799 "state": "online", 00:21:33.799 "raid_level": "raid5f", 00:21:33.799 "superblock": true, 00:21:33.799 "num_base_bdevs": 4, 00:21:33.799 "num_base_bdevs_discovered": 4, 
00:21:33.799 "num_base_bdevs_operational": 4, 00:21:33.799 "process": { 00:21:33.799 "type": "rebuild", 00:21:33.799 "target": "spare", 00:21:33.799 "progress": { 00:21:33.799 "blocks": 86400, 00:21:33.799 "percent": 45 00:21:33.799 } 00:21:33.799 }, 00:21:33.799 "base_bdevs_list": [ 00:21:33.799 { 00:21:33.800 "name": "spare", 00:21:33.800 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 }, 00:21:33.800 { 00:21:33.800 "name": "BaseBdev2", 00:21:33.800 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 }, 00:21:33.800 { 00:21:33.800 "name": "BaseBdev3", 00:21:33.800 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 }, 00:21:33.800 { 00:21:33.800 "name": "BaseBdev4", 00:21:33.800 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 } 00:21:33.800 ] 00:21:33.800 }' 00:21:33.800 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.800 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.800 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.059 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:34.059 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:35.000 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:35.000 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:35.000 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.000 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:35.000 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.001 "name": "raid_bdev1", 00:21:35.001 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:35.001 "strip_size_kb": 64, 00:21:35.001 "state": "online", 00:21:35.001 "raid_level": "raid5f", 00:21:35.001 "superblock": true, 00:21:35.001 "num_base_bdevs": 4, 00:21:35.001 "num_base_bdevs_discovered": 4, 00:21:35.001 "num_base_bdevs_operational": 4, 00:21:35.001 "process": { 00:21:35.001 "type": "rebuild", 00:21:35.001 "target": "spare", 00:21:35.001 "progress": { 00:21:35.001 "blocks": 109440, 00:21:35.001 "percent": 57 00:21:35.001 } 00:21:35.001 }, 00:21:35.001 "base_bdevs_list": [ 00:21:35.001 { 00:21:35.001 "name": "spare", 00:21:35.001 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:35.001 "is_configured": true, 00:21:35.001 "data_offset": 2048, 00:21:35.001 "data_size": 63488 00:21:35.001 }, 00:21:35.001 { 00:21:35.001 "name": "BaseBdev2", 
00:21:35.001 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:35.001 "is_configured": true, 00:21:35.001 "data_offset": 2048, 00:21:35.001 "data_size": 63488 00:21:35.001 }, 00:21:35.001 { 00:21:35.001 "name": "BaseBdev3", 00:21:35.001 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:35.001 "is_configured": true, 00:21:35.001 "data_offset": 2048, 00:21:35.001 "data_size": 63488 00:21:35.001 }, 00:21:35.001 { 00:21:35.001 "name": "BaseBdev4", 00:21:35.001 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:35.001 "is_configured": true, 00:21:35.001 "data_offset": 2048, 00:21:35.001 "data_size": 63488 00:21:35.001 } 00:21:35.001 ] 00:21:35.001 }' 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.001 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.377 "name": "raid_bdev1", 00:21:36.377 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:36.377 "strip_size_kb": 64, 00:21:36.377 "state": "online", 00:21:36.377 "raid_level": "raid5f", 00:21:36.377 "superblock": true, 00:21:36.377 "num_base_bdevs": 4, 00:21:36.377 "num_base_bdevs_discovered": 4, 00:21:36.377 "num_base_bdevs_operational": 4, 00:21:36.377 "process": { 00:21:36.377 "type": "rebuild", 00:21:36.377 "target": "spare", 00:21:36.377 "progress": { 00:21:36.377 "blocks": 130560, 00:21:36.377 "percent": 68 00:21:36.377 } 00:21:36.377 }, 00:21:36.377 "base_bdevs_list": [ 00:21:36.377 { 00:21:36.377 "name": "spare", 00:21:36.377 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:36.377 "is_configured": true, 00:21:36.377 "data_offset": 2048, 00:21:36.377 "data_size": 63488 00:21:36.377 }, 00:21:36.377 { 00:21:36.377 "name": "BaseBdev2", 00:21:36.377 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:36.377 "is_configured": true, 00:21:36.377 "data_offset": 2048, 00:21:36.377 "data_size": 63488 00:21:36.377 }, 00:21:36.377 { 00:21:36.377 "name": "BaseBdev3", 00:21:36.377 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:36.377 "is_configured": true, 00:21:36.377 "data_offset": 2048, 00:21:36.377 "data_size": 63488 00:21:36.377 }, 00:21:36.377 { 00:21:36.377 "name": "BaseBdev4", 00:21:36.377 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:36.377 "is_configured": true, 
00:21:36.377 "data_offset": 2048, 00:21:36.377 "data_size": 63488 00:21:36.377 } 00:21:36.377 ] 00:21:36.377 }' 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.377 14:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.311 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:37.312 "name": "raid_bdev1", 00:21:37.312 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:37.312 "strip_size_kb": 64, 00:21:37.312 "state": "online", 00:21:37.312 "raid_level": "raid5f", 00:21:37.312 "superblock": true, 00:21:37.312 "num_base_bdevs": 4, 00:21:37.312 "num_base_bdevs_discovered": 4, 00:21:37.312 "num_base_bdevs_operational": 4, 00:21:37.312 "process": { 00:21:37.312 "type": "rebuild", 00:21:37.312 "target": "spare", 00:21:37.312 "progress": { 00:21:37.312 "blocks": 153600, 00:21:37.312 "percent": 80 00:21:37.312 } 00:21:37.312 }, 00:21:37.312 "base_bdevs_list": [ 00:21:37.312 { 00:21:37.312 "name": "spare", 00:21:37.312 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:37.312 "is_configured": true, 00:21:37.312 "data_offset": 2048, 00:21:37.312 "data_size": 63488 00:21:37.312 }, 00:21:37.312 { 00:21:37.312 "name": "BaseBdev2", 00:21:37.312 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:37.312 "is_configured": true, 00:21:37.312 "data_offset": 2048, 00:21:37.312 "data_size": 63488 00:21:37.312 }, 00:21:37.312 { 00:21:37.312 "name": "BaseBdev3", 00:21:37.312 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:37.312 "is_configured": true, 00:21:37.312 "data_offset": 2048, 00:21:37.312 "data_size": 63488 00:21:37.312 }, 00:21:37.312 { 00:21:37.312 "name": "BaseBdev4", 00:21:37.312 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:37.312 "is_configured": true, 00:21:37.312 "data_offset": 2048, 00:21:37.312 "data_size": 63488 00:21:37.312 } 00:21:37.312 ] 00:21:37.312 }' 00:21:37.312 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.312 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.312 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.570 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:21:37.570 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.504 "name": "raid_bdev1", 00:21:38.504 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:38.504 "strip_size_kb": 64, 00:21:38.504 "state": "online", 00:21:38.504 "raid_level": "raid5f", 00:21:38.504 "superblock": true, 00:21:38.504 "num_base_bdevs": 4, 00:21:38.504 "num_base_bdevs_discovered": 4, 00:21:38.504 "num_base_bdevs_operational": 4, 00:21:38.504 "process": { 00:21:38.504 "type": "rebuild", 00:21:38.504 "target": "spare", 00:21:38.504 "progress": { 00:21:38.504 "blocks": 176640, 00:21:38.504 "percent": 92 00:21:38.504 
} 00:21:38.504 }, 00:21:38.504 "base_bdevs_list": [ 00:21:38.504 { 00:21:38.504 "name": "spare", 00:21:38.504 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:38.504 "is_configured": true, 00:21:38.504 "data_offset": 2048, 00:21:38.504 "data_size": 63488 00:21:38.504 }, 00:21:38.504 { 00:21:38.504 "name": "BaseBdev2", 00:21:38.504 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:38.504 "is_configured": true, 00:21:38.504 "data_offset": 2048, 00:21:38.504 "data_size": 63488 00:21:38.504 }, 00:21:38.504 { 00:21:38.504 "name": "BaseBdev3", 00:21:38.504 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:38.504 "is_configured": true, 00:21:38.504 "data_offset": 2048, 00:21:38.504 "data_size": 63488 00:21:38.504 }, 00:21:38.504 { 00:21:38.504 "name": "BaseBdev4", 00:21:38.504 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:38.504 "is_configured": true, 00:21:38.504 "data_offset": 2048, 00:21:38.504 "data_size": 63488 00:21:38.504 } 00:21:38.504 ] 00:21:38.504 }' 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.504 14:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.763 14:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.763 14:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:39.332 [2024-11-27 14:21:09.661044] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:39.332 [2024-11-27 14:21:09.661168] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:39.332 [2024-11-27 14:21:09.661368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.591 "name": "raid_bdev1", 00:21:39.591 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:39.591 "strip_size_kb": 64, 00:21:39.591 "state": "online", 00:21:39.591 "raid_level": "raid5f", 00:21:39.591 "superblock": true, 00:21:39.591 "num_base_bdevs": 4, 00:21:39.591 "num_base_bdevs_discovered": 4, 00:21:39.591 "num_base_bdevs_operational": 4, 00:21:39.591 "base_bdevs_list": [ 00:21:39.591 { 00:21:39.591 "name": "spare", 00:21:39.591 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:39.591 "is_configured": true, 00:21:39.591 "data_offset": 2048, 00:21:39.591 "data_size": 63488 00:21:39.591 }, 00:21:39.591 { 00:21:39.591 "name": "BaseBdev2", 00:21:39.591 "uuid": 
"718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:39.591 "is_configured": true, 00:21:39.591 "data_offset": 2048, 00:21:39.591 "data_size": 63488 00:21:39.591 }, 00:21:39.591 { 00:21:39.591 "name": "BaseBdev3", 00:21:39.591 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:39.591 "is_configured": true, 00:21:39.591 "data_offset": 2048, 00:21:39.591 "data_size": 63488 00:21:39.591 }, 00:21:39.591 { 00:21:39.591 "name": "BaseBdev4", 00:21:39.591 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:39.591 "is_configured": true, 00:21:39.591 "data_offset": 2048, 00:21:39.591 "data_size": 63488 00:21:39.591 } 00:21:39.591 ] 00:21:39.591 }' 00:21:39.591 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.850 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:39.850 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.850 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:39.850 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:39.850 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.851 "name": "raid_bdev1", 00:21:39.851 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:39.851 "strip_size_kb": 64, 00:21:39.851 "state": "online", 00:21:39.851 "raid_level": "raid5f", 00:21:39.851 "superblock": true, 00:21:39.851 "num_base_bdevs": 4, 00:21:39.851 "num_base_bdevs_discovered": 4, 00:21:39.851 "num_base_bdevs_operational": 4, 00:21:39.851 "base_bdevs_list": [ 00:21:39.851 { 00:21:39.851 "name": "spare", 00:21:39.851 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:39.851 "is_configured": true, 00:21:39.851 "data_offset": 2048, 00:21:39.851 "data_size": 63488 00:21:39.851 }, 00:21:39.851 { 00:21:39.851 "name": "BaseBdev2", 00:21:39.851 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:39.851 "is_configured": true, 00:21:39.851 "data_offset": 2048, 00:21:39.851 "data_size": 63488 00:21:39.851 }, 00:21:39.851 { 00:21:39.851 "name": "BaseBdev3", 00:21:39.851 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:39.851 "is_configured": true, 00:21:39.851 "data_offset": 2048, 00:21:39.851 "data_size": 63488 00:21:39.851 }, 00:21:39.851 { 00:21:39.851 "name": "BaseBdev4", 00:21:39.851 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:39.851 "is_configured": true, 00:21:39.851 "data_offset": 2048, 00:21:39.851 "data_size": 63488 00:21:39.851 } 00:21:39.851 ] 00:21:39.851 }' 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:39.851 
14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.851 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.110 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.110 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:40.110 "name": "raid_bdev1", 00:21:40.110 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:40.110 "strip_size_kb": 64, 00:21:40.110 "state": "online", 00:21:40.110 "raid_level": "raid5f", 00:21:40.110 "superblock": true, 00:21:40.110 "num_base_bdevs": 4, 00:21:40.110 "num_base_bdevs_discovered": 4, 00:21:40.110 "num_base_bdevs_operational": 4, 00:21:40.110 "base_bdevs_list": [ 00:21:40.110 { 00:21:40.110 "name": "spare", 00:21:40.110 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:40.110 "is_configured": true, 00:21:40.110 "data_offset": 2048, 00:21:40.110 "data_size": 63488 00:21:40.110 }, 00:21:40.110 { 00:21:40.110 "name": "BaseBdev2", 00:21:40.110 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:40.110 "is_configured": true, 00:21:40.110 "data_offset": 2048, 00:21:40.110 "data_size": 63488 00:21:40.110 }, 00:21:40.110 { 00:21:40.110 "name": "BaseBdev3", 00:21:40.110 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:40.110 "is_configured": true, 00:21:40.110 "data_offset": 2048, 00:21:40.110 "data_size": 63488 00:21:40.110 }, 00:21:40.110 { 00:21:40.110 "name": "BaseBdev4", 00:21:40.110 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:40.110 "is_configured": true, 00:21:40.110 "data_offset": 2048, 00:21:40.110 "data_size": 63488 00:21:40.110 } 00:21:40.110 ] 00:21:40.110 }' 00:21:40.110 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.110 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.368 [2024-11-27 14:21:10.860639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:21:40.368 [2024-11-27 14:21:10.860805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.368 [2024-11-27 14:21:10.861065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.368 [2024-11-27 14:21:10.861204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.368 [2024-11-27 14:21:10.861235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:40.368 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.625 14:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:40.882 /dev/nbd0 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:40.882 1+0 records in 
00:21:40.882 1+0 records out 00:21:40.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298703 s, 13.7 MB/s 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.882 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:41.140 /dev/nbd1 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.140 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:41.398 14:21:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.398 1+0 records in 00:21:41.398 1+0 records out 00:21:41.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978142 s, 4.2 MB/s 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.398 14:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.964 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 [2024-11-27 14:21:12.543123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.222 [2024-11-27 14:21:12.543190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.222 [2024-11-27 14:21:12.543223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:42.222 [2024-11-27 14:21:12.543238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.222 [2024-11-27 14:21:12.546346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.222 [2024-11-27 14:21:12.546392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.222 [2024-11-27 14:21:12.546524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:42.222 [2024-11-27 14:21:12.546610] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.222 [2024-11-27 14:21:12.546810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.222 [2024-11-27 14:21:12.546994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:42.222 [2024-11-27 14:21:12.547125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:42.222 spare 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 [2024-11-27 14:21:12.647327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:42.222 [2024-11-27 14:21:12.647440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:42.222 [2024-11-27 14:21:12.648047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:21:42.222 [2024-11-27 14:21:12.655067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:42.222 [2024-11-27 14:21:12.655099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:42.222 [2024-11-27 14:21:12.655366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.222 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.222 "name": "raid_bdev1", 00:21:42.222 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:42.222 "strip_size_kb": 64, 00:21:42.222 "state": "online", 00:21:42.222 "raid_level": "raid5f", 00:21:42.222 "superblock": true, 00:21:42.222 "num_base_bdevs": 4, 00:21:42.222 "num_base_bdevs_discovered": 4, 00:21:42.222 "num_base_bdevs_operational": 4, 00:21:42.222 "base_bdevs_list": [ 00:21:42.222 { 
00:21:42.222 "name": "spare", 00:21:42.222 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:42.222 "is_configured": true, 00:21:42.223 "data_offset": 2048, 00:21:42.223 "data_size": 63488 00:21:42.223 }, 00:21:42.223 { 00:21:42.223 "name": "BaseBdev2", 00:21:42.223 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:42.223 "is_configured": true, 00:21:42.223 "data_offset": 2048, 00:21:42.223 "data_size": 63488 00:21:42.223 }, 00:21:42.223 { 00:21:42.223 "name": "BaseBdev3", 00:21:42.223 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:42.223 "is_configured": true, 00:21:42.223 "data_offset": 2048, 00:21:42.223 "data_size": 63488 00:21:42.223 }, 00:21:42.223 { 00:21:42.223 "name": "BaseBdev4", 00:21:42.223 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:42.223 "is_configured": true, 00:21:42.223 "data_offset": 2048, 00:21:42.223 "data_size": 63488 00:21:42.223 } 00:21:42.223 ] 00:21:42.223 }' 00:21:42.223 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.223 14:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.789 "name": "raid_bdev1", 00:21:42.789 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:42.789 "strip_size_kb": 64, 00:21:42.789 "state": "online", 00:21:42.789 "raid_level": "raid5f", 00:21:42.789 "superblock": true, 00:21:42.789 "num_base_bdevs": 4, 00:21:42.789 "num_base_bdevs_discovered": 4, 00:21:42.789 "num_base_bdevs_operational": 4, 00:21:42.789 "base_bdevs_list": [ 00:21:42.789 { 00:21:42.789 "name": "spare", 00:21:42.789 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:42.789 "is_configured": true, 00:21:42.789 "data_offset": 2048, 00:21:42.789 "data_size": 63488 00:21:42.789 }, 00:21:42.789 { 00:21:42.789 "name": "BaseBdev2", 00:21:42.789 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:42.789 "is_configured": true, 00:21:42.789 "data_offset": 2048, 00:21:42.789 "data_size": 63488 00:21:42.789 }, 00:21:42.789 { 00:21:42.789 "name": "BaseBdev3", 00:21:42.789 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:42.789 "is_configured": true, 00:21:42.789 "data_offset": 2048, 00:21:42.789 "data_size": 63488 00:21:42.789 }, 00:21:42.789 { 00:21:42.789 "name": "BaseBdev4", 00:21:42.789 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:42.789 "is_configured": true, 00:21:42.789 "data_offset": 2048, 00:21:42.789 "data_size": 63488 00:21:42.789 } 00:21:42.789 ] 00:21:42.789 }' 00:21:42.789 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.046 [2024-11-27 14:21:13.423069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.046 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.047 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.047 "name": "raid_bdev1", 00:21:43.047 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:43.047 "strip_size_kb": 64, 00:21:43.047 "state": "online", 00:21:43.047 "raid_level": "raid5f", 00:21:43.047 "superblock": true, 00:21:43.047 "num_base_bdevs": 4, 00:21:43.047 "num_base_bdevs_discovered": 3, 00:21:43.047 "num_base_bdevs_operational": 3, 00:21:43.047 "base_bdevs_list": [ 00:21:43.047 { 00:21:43.047 "name": null, 00:21:43.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.047 "is_configured": false, 00:21:43.047 "data_offset": 0, 00:21:43.047 "data_size": 63488 00:21:43.047 }, 00:21:43.047 { 00:21:43.047 "name": "BaseBdev2", 00:21:43.047 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:43.047 "is_configured": true, 00:21:43.047 "data_offset": 2048, 00:21:43.047 "data_size": 63488 00:21:43.047 }, 00:21:43.047 
{ 00:21:43.047 "name": "BaseBdev3", 00:21:43.047 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:43.047 "is_configured": true, 00:21:43.047 "data_offset": 2048, 00:21:43.047 "data_size": 63488 00:21:43.047 }, 00:21:43.047 { 00:21:43.047 "name": "BaseBdev4", 00:21:43.047 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:43.047 "is_configured": true, 00:21:43.047 "data_offset": 2048, 00:21:43.047 "data_size": 63488 00:21:43.047 } 00:21:43.047 ] 00:21:43.047 }' 00:21:43.047 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.047 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.611 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.611 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.611 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.611 [2024-11-27 14:21:13.975279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.612 [2024-11-27 14:21:13.975524] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:43.612 [2024-11-27 14:21:13.975556] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:43.612 [2024-11-27 14:21:13.975612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.612 [2024-11-27 14:21:13.989126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:21:43.612 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.612 14:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:43.612 [2024-11-27 14:21:13.998056] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.548 14:21:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.548 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.548 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.548 "name": "raid_bdev1", 00:21:44.548 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:44.548 "strip_size_kb": 64, 00:21:44.548 "state": "online", 00:21:44.548 
"raid_level": "raid5f", 00:21:44.548 "superblock": true, 00:21:44.548 "num_base_bdevs": 4, 00:21:44.548 "num_base_bdevs_discovered": 4, 00:21:44.548 "num_base_bdevs_operational": 4, 00:21:44.548 "process": { 00:21:44.548 "type": "rebuild", 00:21:44.548 "target": "spare", 00:21:44.548 "progress": { 00:21:44.548 "blocks": 17280, 00:21:44.548 "percent": 9 00:21:44.548 } 00:21:44.548 }, 00:21:44.548 "base_bdevs_list": [ 00:21:44.548 { 00:21:44.548 "name": "spare", 00:21:44.548 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:44.548 "is_configured": true, 00:21:44.548 "data_offset": 2048, 00:21:44.548 "data_size": 63488 00:21:44.548 }, 00:21:44.548 { 00:21:44.548 "name": "BaseBdev2", 00:21:44.548 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:44.548 "is_configured": true, 00:21:44.548 "data_offset": 2048, 00:21:44.548 "data_size": 63488 00:21:44.548 }, 00:21:44.548 { 00:21:44.548 "name": "BaseBdev3", 00:21:44.548 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:44.548 "is_configured": true, 00:21:44.548 "data_offset": 2048, 00:21:44.548 "data_size": 63488 00:21:44.548 }, 00:21:44.548 { 00:21:44.548 "name": "BaseBdev4", 00:21:44.548 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:44.548 "is_configured": true, 00:21:44.548 "data_offset": 2048, 00:21:44.548 "data_size": 63488 00:21:44.548 } 00:21:44.548 ] 00:21:44.548 }' 00:21:44.548 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.814 [2024-11-27 14:21:15.155441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.814 [2024-11-27 14:21:15.211421] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.814 [2024-11-27 14:21:15.211519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.814 [2024-11-27 14:21:15.211556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.814 [2024-11-27 14:21:15.211574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:44.814 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.815 "name": "raid_bdev1", 00:21:44.815 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:44.815 "strip_size_kb": 64, 00:21:44.815 "state": "online", 00:21:44.815 "raid_level": "raid5f", 00:21:44.815 "superblock": true, 00:21:44.815 "num_base_bdevs": 4, 00:21:44.815 "num_base_bdevs_discovered": 3, 00:21:44.815 "num_base_bdevs_operational": 3, 00:21:44.815 "base_bdevs_list": [ 00:21:44.815 { 00:21:44.815 "name": null, 00:21:44.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.815 "is_configured": false, 00:21:44.815 "data_offset": 0, 00:21:44.815 "data_size": 63488 00:21:44.815 }, 00:21:44.815 { 00:21:44.815 "name": "BaseBdev2", 00:21:44.815 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:44.815 "is_configured": true, 00:21:44.815 "data_offset": 2048, 00:21:44.815 "data_size": 63488 00:21:44.815 }, 00:21:44.815 { 00:21:44.815 "name": "BaseBdev3", 00:21:44.815 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:44.815 "is_configured": true, 00:21:44.815 "data_offset": 2048, 00:21:44.815 "data_size": 63488 00:21:44.815 }, 00:21:44.815 { 00:21:44.815 "name": "BaseBdev4", 00:21:44.815 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:44.815 "is_configured": true, 00:21:44.815 "data_offset": 2048, 00:21:44.815 "data_size": 63488 00:21:44.815 } 00:21:44.815 ] 00:21:44.815 }' 
00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.815 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.399 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:45.399 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.399 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.399 [2024-11-27 14:21:15.742690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:45.399 [2024-11-27 14:21:15.742775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.399 [2024-11-27 14:21:15.742811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:45.399 [2024-11-27 14:21:15.742847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.399 [2024-11-27 14:21:15.743451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.399 [2024-11-27 14:21:15.743499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:45.399 [2024-11-27 14:21:15.743617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:45.399 [2024-11-27 14:21:15.743641] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:45.399 [2024-11-27 14:21:15.743654] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:45.399 [2024-11-27 14:21:15.743699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.399 [2024-11-27 14:21:15.756883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:21:45.399 spare 00:21:45.399 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.399 14:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:45.399 [2024-11-27 14:21:15.765659] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.334 "name": "raid_bdev1", 00:21:46.334 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:46.334 "strip_size_kb": 64, 00:21:46.334 "state": 
"online", 00:21:46.334 "raid_level": "raid5f", 00:21:46.334 "superblock": true, 00:21:46.334 "num_base_bdevs": 4, 00:21:46.334 "num_base_bdevs_discovered": 4, 00:21:46.334 "num_base_bdevs_operational": 4, 00:21:46.334 "process": { 00:21:46.334 "type": "rebuild", 00:21:46.334 "target": "spare", 00:21:46.334 "progress": { 00:21:46.334 "blocks": 17280, 00:21:46.334 "percent": 9 00:21:46.334 } 00:21:46.334 }, 00:21:46.334 "base_bdevs_list": [ 00:21:46.334 { 00:21:46.334 "name": "spare", 00:21:46.334 "uuid": "3df43183-d66f-583f-98f7-d81cc135673e", 00:21:46.334 "is_configured": true, 00:21:46.334 "data_offset": 2048, 00:21:46.334 "data_size": 63488 00:21:46.334 }, 00:21:46.334 { 00:21:46.334 "name": "BaseBdev2", 00:21:46.334 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:46.334 "is_configured": true, 00:21:46.334 "data_offset": 2048, 00:21:46.334 "data_size": 63488 00:21:46.334 }, 00:21:46.334 { 00:21:46.334 "name": "BaseBdev3", 00:21:46.334 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:46.334 "is_configured": true, 00:21:46.334 "data_offset": 2048, 00:21:46.334 "data_size": 63488 00:21:46.334 }, 00:21:46.334 { 00:21:46.334 "name": "BaseBdev4", 00:21:46.334 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:46.334 "is_configured": true, 00:21:46.334 "data_offset": 2048, 00:21:46.334 "data_size": 63488 00:21:46.334 } 00:21:46.334 ] 00:21:46.334 }' 00:21:46.334 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.592 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.592 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.592 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.592 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:46.592 14:21:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.592 14:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.592 [2024-11-27 14:21:16.914868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.592 [2024-11-27 14:21:16.978835] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:46.592 [2024-11-27 14:21:16.978922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.592 [2024-11-27 14:21:16.978957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.592 [2024-11-27 14:21:16.978969] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.592 14:21:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.592 "name": "raid_bdev1", 00:21:46.592 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:46.592 "strip_size_kb": 64, 00:21:46.592 "state": "online", 00:21:46.592 "raid_level": "raid5f", 00:21:46.592 "superblock": true, 00:21:46.592 "num_base_bdevs": 4, 00:21:46.592 "num_base_bdevs_discovered": 3, 00:21:46.592 "num_base_bdevs_operational": 3, 00:21:46.592 "base_bdevs_list": [ 00:21:46.592 { 00:21:46.592 "name": null, 00:21:46.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.592 "is_configured": false, 00:21:46.592 "data_offset": 0, 00:21:46.592 "data_size": 63488 00:21:46.592 }, 00:21:46.592 { 00:21:46.592 "name": "BaseBdev2", 00:21:46.592 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:46.592 "is_configured": true, 00:21:46.592 "data_offset": 2048, 00:21:46.592 "data_size": 63488 00:21:46.592 }, 00:21:46.592 { 00:21:46.592 "name": "BaseBdev3", 00:21:46.592 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:46.592 "is_configured": true, 00:21:46.592 "data_offset": 2048, 00:21:46.592 "data_size": 63488 00:21:46.592 }, 00:21:46.592 { 00:21:46.592 "name": "BaseBdev4", 00:21:46.592 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:46.592 "is_configured": true, 00:21:46.592 "data_offset": 2048, 00:21:46.592 
"data_size": 63488 00:21:46.592 } 00:21:46.592 ] 00:21:46.592 }' 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.592 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:47.159 "name": "raid_bdev1", 00:21:47.159 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:47.159 "strip_size_kb": 64, 00:21:47.159 "state": "online", 00:21:47.159 "raid_level": "raid5f", 00:21:47.159 "superblock": true, 00:21:47.159 "num_base_bdevs": 4, 00:21:47.159 "num_base_bdevs_discovered": 3, 00:21:47.159 "num_base_bdevs_operational": 3, 00:21:47.159 "base_bdevs_list": [ 00:21:47.159 { 00:21:47.159 "name": null, 00:21:47.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.159 
"is_configured": false, 00:21:47.159 "data_offset": 0, 00:21:47.159 "data_size": 63488 00:21:47.159 }, 00:21:47.159 { 00:21:47.159 "name": "BaseBdev2", 00:21:47.159 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:47.159 "is_configured": true, 00:21:47.159 "data_offset": 2048, 00:21:47.159 "data_size": 63488 00:21:47.159 }, 00:21:47.159 { 00:21:47.159 "name": "BaseBdev3", 00:21:47.159 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:47.159 "is_configured": true, 00:21:47.159 "data_offset": 2048, 00:21:47.159 "data_size": 63488 00:21:47.159 }, 00:21:47.159 { 00:21:47.159 "name": "BaseBdev4", 00:21:47.159 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:47.159 "is_configured": true, 00:21:47.159 "data_offset": 2048, 00:21:47.159 "data_size": 63488 00:21:47.159 } 00:21:47.159 ] 00:21:47.159 }' 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.159 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.418 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.418 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:47.418 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.418 14:21:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.418 [2024-11-27 14:21:17.677997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:47.418 [2024-11-27 14:21:17.678065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.418 [2024-11-27 14:21:17.678098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:47.418 [2024-11-27 14:21:17.678113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.418 [2024-11-27 14:21:17.678729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.418 [2024-11-27 14:21:17.678764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:47.418 [2024-11-27 14:21:17.678882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:47.418 [2024-11-27 14:21:17.678915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:47.418 [2024-11-27 14:21:17.678933] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:47.418 [2024-11-27 14:21:17.678946] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:47.418 BaseBdev1 00:21:47.418 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.418 14:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.354 "name": "raid_bdev1", 00:21:48.354 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:48.354 "strip_size_kb": 64, 00:21:48.354 "state": "online", 00:21:48.354 "raid_level": "raid5f", 00:21:48.354 "superblock": true, 00:21:48.354 "num_base_bdevs": 4, 00:21:48.354 "num_base_bdevs_discovered": 3, 00:21:48.354 "num_base_bdevs_operational": 3, 00:21:48.354 "base_bdevs_list": [ 00:21:48.354 { 00:21:48.354 "name": null, 00:21:48.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.354 "is_configured": false, 00:21:48.354 
"data_offset": 0, 00:21:48.354 "data_size": 63488 00:21:48.354 }, 00:21:48.354 { 00:21:48.354 "name": "BaseBdev2", 00:21:48.354 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:48.354 "is_configured": true, 00:21:48.354 "data_offset": 2048, 00:21:48.354 "data_size": 63488 00:21:48.354 }, 00:21:48.354 { 00:21:48.354 "name": "BaseBdev3", 00:21:48.354 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:48.354 "is_configured": true, 00:21:48.354 "data_offset": 2048, 00:21:48.354 "data_size": 63488 00:21:48.354 }, 00:21:48.354 { 00:21:48.354 "name": "BaseBdev4", 00:21:48.354 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:48.354 "is_configured": true, 00:21:48.354 "data_offset": 2048, 00:21:48.354 "data_size": 63488 00:21:48.354 } 00:21:48.354 ] 00:21:48.354 }' 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.354 14:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.919 "name": "raid_bdev1", 00:21:48.919 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:48.919 "strip_size_kb": 64, 00:21:48.919 "state": "online", 00:21:48.919 "raid_level": "raid5f", 00:21:48.919 "superblock": true, 00:21:48.919 "num_base_bdevs": 4, 00:21:48.919 "num_base_bdevs_discovered": 3, 00:21:48.919 "num_base_bdevs_operational": 3, 00:21:48.919 "base_bdevs_list": [ 00:21:48.919 { 00:21:48.919 "name": null, 00:21:48.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.919 "is_configured": false, 00:21:48.919 "data_offset": 0, 00:21:48.919 "data_size": 63488 00:21:48.919 }, 00:21:48.919 { 00:21:48.919 "name": "BaseBdev2", 00:21:48.919 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:48.919 "is_configured": true, 00:21:48.919 "data_offset": 2048, 00:21:48.919 "data_size": 63488 00:21:48.919 }, 00:21:48.919 { 00:21:48.919 "name": "BaseBdev3", 00:21:48.919 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:48.919 "is_configured": true, 00:21:48.919 "data_offset": 2048, 00:21:48.919 "data_size": 63488 00:21:48.919 }, 00:21:48.919 { 00:21:48.919 "name": "BaseBdev4", 00:21:48.919 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:48.919 "is_configured": true, 00:21:48.919 "data_offset": 2048, 00:21:48.919 "data_size": 63488 00:21:48.919 } 00:21:48.919 ] 00:21:48.919 }' 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:48.919 
14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.919 [2024-11-27 14:21:19.346524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.919 [2024-11-27 14:21:19.346746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:48.919 [2024-11-27 14:21:19.346769] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:48.919 request: 00:21:48.919 { 00:21:48.919 "base_bdev": "BaseBdev1", 00:21:48.919 "raid_bdev": "raid_bdev1", 00:21:48.919 "method": "bdev_raid_add_base_bdev", 00:21:48.919 "req_id": 1 00:21:48.919 } 00:21:48.919 Got JSON-RPC error response 00:21:48.919 response: 00:21:48.919 { 00:21:48.919 "code": -22, 00:21:48.919 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:21:48.919 } 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.919 14:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.853 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.111 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.111 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.111 "name": "raid_bdev1", 00:21:50.111 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:50.111 "strip_size_kb": 64, 00:21:50.111 "state": "online", 00:21:50.111 "raid_level": "raid5f", 00:21:50.111 "superblock": true, 00:21:50.111 "num_base_bdevs": 4, 00:21:50.111 "num_base_bdevs_discovered": 3, 00:21:50.111 "num_base_bdevs_operational": 3, 00:21:50.111 "base_bdevs_list": [ 00:21:50.111 { 00:21:50.111 "name": null, 00:21:50.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.111 "is_configured": false, 00:21:50.111 "data_offset": 0, 00:21:50.111 "data_size": 63488 00:21:50.111 }, 00:21:50.111 { 00:21:50.111 "name": "BaseBdev2", 00:21:50.111 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:50.111 "is_configured": true, 00:21:50.111 "data_offset": 2048, 00:21:50.111 "data_size": 63488 00:21:50.111 }, 00:21:50.111 { 00:21:50.111 "name": "BaseBdev3", 00:21:50.111 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:50.111 "is_configured": true, 00:21:50.111 "data_offset": 2048, 00:21:50.111 "data_size": 63488 00:21:50.111 }, 00:21:50.111 { 00:21:50.111 "name": "BaseBdev4", 00:21:50.111 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:50.111 "is_configured": true, 00:21:50.111 "data_offset": 2048, 00:21:50.111 "data_size": 63488 00:21:50.111 } 00:21:50.111 ] 00:21:50.111 }' 00:21:50.111 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.111 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.370 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.629 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.629 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.629 "name": "raid_bdev1", 00:21:50.629 "uuid": "85edf401-ddea-47ba-8da7-674b302c5b7f", 00:21:50.629 "strip_size_kb": 64, 00:21:50.629 "state": "online", 00:21:50.629 "raid_level": "raid5f", 00:21:50.629 "superblock": true, 00:21:50.629 "num_base_bdevs": 4, 00:21:50.629 "num_base_bdevs_discovered": 3, 00:21:50.629 "num_base_bdevs_operational": 3, 00:21:50.629 "base_bdevs_list": [ 00:21:50.629 { 00:21:50.629 "name": null, 00:21:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.629 "is_configured": false, 00:21:50.629 "data_offset": 0, 00:21:50.629 "data_size": 63488 00:21:50.629 }, 00:21:50.629 { 00:21:50.629 "name": "BaseBdev2", 00:21:50.629 "uuid": "718082f3-6873-5301-8b93-0ae03b4e9765", 00:21:50.629 "is_configured": true, 
00:21:50.629 "data_offset": 2048, 00:21:50.629 "data_size": 63488 00:21:50.629 }, 00:21:50.629 { 00:21:50.629 "name": "BaseBdev3", 00:21:50.629 "uuid": "6a2f78cb-5c87-5bdf-8853-f47c724acdbf", 00:21:50.629 "is_configured": true, 00:21:50.629 "data_offset": 2048, 00:21:50.629 "data_size": 63488 00:21:50.629 }, 00:21:50.629 { 00:21:50.629 "name": "BaseBdev4", 00:21:50.629 "uuid": "ccb08789-49dd-5292-87b2-02b1ecfdece1", 00:21:50.629 "is_configured": true, 00:21:50.629 "data_offset": 2048, 00:21:50.629 "data_size": 63488 00:21:50.629 } 00:21:50.629 ] 00:21:50.629 }' 00:21:50.629 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.629 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:50.629 14:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85759 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85759 ']' 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85759 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85759 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.629 killing process with pid 85759 00:21:50.629 14:21:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85759' 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85759 00:21:50.629 Received shutdown signal, test time was about 60.000000 seconds 00:21:50.629 00:21:50.629 Latency(us) 00:21:50.629 [2024-11-27T14:21:21.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.629 [2024-11-27T14:21:21.142Z] =================================================================================================================== 00:21:50.629 [2024-11-27T14:21:21.142Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:50.629 [2024-11-27 14:21:21.040588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:50.629 [2024-11-27 14:21:21.040744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.629 14:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85759 00:21:50.629 [2024-11-27 14:21:21.040870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.629 [2024-11-27 14:21:21.040901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:51.196 [2024-11-27 14:21:21.478337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.131 14:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:52.131 00:21:52.131 real 0m29.255s 00:21:52.131 user 0m38.165s 00:21:52.131 sys 0m3.030s 00:21:52.131 14:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.131 14:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.131 ************************************ 00:21:52.131 END TEST raid5f_rebuild_test_sb 00:21:52.132 ************************************ 00:21:52.132 14:21:22 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:21:52.132 14:21:22 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:52.132 14:21:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:52.132 14:21:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.132 14:21:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.132 ************************************ 00:21:52.132 START TEST raid_state_function_test_sb_4k 00:21:52.132 ************************************ 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:52.132 14:21:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86592 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86592' 00:21:52.132 Process raid pid: 86592 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86592 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86592 ']' 00:21:52.132 14:21:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.132 14:21:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.392 [2024-11-27 14:21:22.710283] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:21:52.392 [2024-11-27 14:21:22.710664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.392 [2024-11-27 14:21:22.891451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.651 [2024-11-27 14:21:23.026412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.910 [2024-11-27 14:21:23.234489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.910 [2024-11-27 14:21:23.234543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.168 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.168 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:53.168 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:21:53.168 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.169 [2024-11-27 14:21:23.651173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.169 [2024-11-27 14:21:23.651389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.169 [2024-11-27 14:21:23.651535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.169 [2024-11-27 14:21:23.651570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.169 
14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.169 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.427 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.427 "name": "Existed_Raid", 00:21:53.427 "uuid": "01e46cfc-e099-433f-b6f7-c62178d071c9", 00:21:53.427 "strip_size_kb": 0, 00:21:53.427 "state": "configuring", 00:21:53.427 "raid_level": "raid1", 00:21:53.427 "superblock": true, 00:21:53.427 "num_base_bdevs": 2, 00:21:53.427 "num_base_bdevs_discovered": 0, 00:21:53.427 "num_base_bdevs_operational": 2, 00:21:53.427 "base_bdevs_list": [ 00:21:53.427 { 00:21:53.427 "name": "BaseBdev1", 00:21:53.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.427 "is_configured": false, 00:21:53.427 "data_offset": 0, 00:21:53.427 "data_size": 0 00:21:53.427 }, 00:21:53.427 { 00:21:53.427 "name": "BaseBdev2", 00:21:53.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.427 "is_configured": false, 00:21:53.427 "data_offset": 0, 00:21:53.427 "data_size": 0 00:21:53.427 } 00:21:53.427 ] 00:21:53.427 }' 00:21:53.427 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.428 14:21:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.685 [2024-11-27 14:21:24.175283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.685 [2024-11-27 14:21:24.175482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.685 [2024-11-27 14:21:24.183243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.685 [2024-11-27 14:21:24.183411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.685 [2024-11-27 14:21:24.183537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.685 [2024-11-27 14:21:24.183599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:21:53.685 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.685 14:21:24 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.944 BaseBdev1 00:21:53.944 [2024-11-27 14:21:24.228252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.944 [ 00:21:53.944 { 00:21:53.944 "name": "BaseBdev1", 00:21:53.944 "aliases": [ 00:21:53.944 
"924543b3-8eda-48a4-8233-06331e78d205" 00:21:53.944 ], 00:21:53.944 "product_name": "Malloc disk", 00:21:53.944 "block_size": 4096, 00:21:53.944 "num_blocks": 8192, 00:21:53.944 "uuid": "924543b3-8eda-48a4-8233-06331e78d205", 00:21:53.944 "assigned_rate_limits": { 00:21:53.944 "rw_ios_per_sec": 0, 00:21:53.944 "rw_mbytes_per_sec": 0, 00:21:53.944 "r_mbytes_per_sec": 0, 00:21:53.944 "w_mbytes_per_sec": 0 00:21:53.944 }, 00:21:53.944 "claimed": true, 00:21:53.944 "claim_type": "exclusive_write", 00:21:53.944 "zoned": false, 00:21:53.944 "supported_io_types": { 00:21:53.944 "read": true, 00:21:53.944 "write": true, 00:21:53.944 "unmap": true, 00:21:53.944 "flush": true, 00:21:53.944 "reset": true, 00:21:53.944 "nvme_admin": false, 00:21:53.944 "nvme_io": false, 00:21:53.944 "nvme_io_md": false, 00:21:53.944 "write_zeroes": true, 00:21:53.944 "zcopy": true, 00:21:53.944 "get_zone_info": false, 00:21:53.944 "zone_management": false, 00:21:53.944 "zone_append": false, 00:21:53.944 "compare": false, 00:21:53.944 "compare_and_write": false, 00:21:53.944 "abort": true, 00:21:53.944 "seek_hole": false, 00:21:53.944 "seek_data": false, 00:21:53.944 "copy": true, 00:21:53.944 "nvme_iov_md": false 00:21:53.944 }, 00:21:53.944 "memory_domains": [ 00:21:53.944 { 00:21:53.944 "dma_device_id": "system", 00:21:53.944 "dma_device_type": 1 00:21:53.944 }, 00:21:53.944 { 00:21:53.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.944 "dma_device_type": 2 00:21:53.944 } 00:21:53.944 ], 00:21:53.944 "driver_specific": {} 00:21:53.944 } 00:21:53.944 ] 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.944 "name": "Existed_Raid", 00:21:53.944 "uuid": "fde46cc2-608a-4c86-b7a6-b22e3a9b3488", 00:21:53.944 "strip_size_kb": 0, 00:21:53.944 "state": "configuring", 00:21:53.944 "raid_level": "raid1", 00:21:53.944 "superblock": true, 00:21:53.944 "num_base_bdevs": 2, 00:21:53.944 
"num_base_bdevs_discovered": 1, 00:21:53.944 "num_base_bdevs_operational": 2, 00:21:53.944 "base_bdevs_list": [ 00:21:53.944 { 00:21:53.944 "name": "BaseBdev1", 00:21:53.944 "uuid": "924543b3-8eda-48a4-8233-06331e78d205", 00:21:53.944 "is_configured": true, 00:21:53.944 "data_offset": 256, 00:21:53.944 "data_size": 7936 00:21:53.944 }, 00:21:53.944 { 00:21:53.944 "name": "BaseBdev2", 00:21:53.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.944 "is_configured": false, 00:21:53.944 "data_offset": 0, 00:21:53.944 "data_size": 0 00:21:53.944 } 00:21:53.944 ] 00:21:53.944 }' 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.944 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.526 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:54.526 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 [2024-11-27 14:21:24.816476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.527 [2024-11-27 14:21:24.817449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 [2024-11-27 14:21:24.824505] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.527 [2024-11-27 14:21:24.827049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:54.527 [2024-11-27 14:21:24.827212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.527 "name": "Existed_Raid", 00:21:54.527 "uuid": "54575daa-bcaf-4cd7-ad10-eb9b6182aa67", 00:21:54.527 "strip_size_kb": 0, 00:21:54.527 "state": "configuring", 00:21:54.527 "raid_level": "raid1", 00:21:54.527 "superblock": true, 00:21:54.527 "num_base_bdevs": 2, 00:21:54.527 "num_base_bdevs_discovered": 1, 00:21:54.527 "num_base_bdevs_operational": 2, 00:21:54.527 "base_bdevs_list": [ 00:21:54.527 { 00:21:54.527 "name": "BaseBdev1", 00:21:54.527 "uuid": "924543b3-8eda-48a4-8233-06331e78d205", 00:21:54.527 "is_configured": true, 00:21:54.527 "data_offset": 256, 00:21:54.527 "data_size": 7936 00:21:54.527 }, 00:21:54.527 { 00:21:54.527 "name": "BaseBdev2", 00:21:54.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.527 "is_configured": false, 00:21:54.527 "data_offset": 0, 00:21:54.527 "data_size": 0 00:21:54.527 } 00:21:54.527 ] 00:21:54.527 }' 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.527 14:21:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.095 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:21:55.095 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.095 14:21:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.095 [2024-11-27 14:21:25.419561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.095 [2024-11-27 14:21:25.419920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:55.095 [2024-11-27 14:21:25.419941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:55.095 [2024-11-27 14:21:25.420262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:55.095 BaseBdev2 00:21:55.095 [2024-11-27 14:21:25.420504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:55.095 [2024-11-27 14:21:25.420530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:55.095 [2024-11-27 14:21:25.420714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.095 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.095 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:55.095 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:55.096 14:21:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.096 [ 00:21:55.096 { 00:21:55.096 "name": "BaseBdev2", 00:21:55.096 "aliases": [ 00:21:55.096 "db439270-5816-413a-8477-e593dfe149f3" 00:21:55.096 ], 00:21:55.096 "product_name": "Malloc disk", 00:21:55.096 "block_size": 4096, 00:21:55.096 "num_blocks": 8192, 00:21:55.096 "uuid": "db439270-5816-413a-8477-e593dfe149f3", 00:21:55.096 "assigned_rate_limits": { 00:21:55.096 "rw_ios_per_sec": 0, 00:21:55.096 "rw_mbytes_per_sec": 0, 00:21:55.096 "r_mbytes_per_sec": 0, 00:21:55.096 "w_mbytes_per_sec": 0 00:21:55.096 }, 00:21:55.096 "claimed": true, 00:21:55.096 "claim_type": "exclusive_write", 00:21:55.096 "zoned": false, 00:21:55.096 "supported_io_types": { 00:21:55.096 "read": true, 00:21:55.096 "write": true, 00:21:55.096 "unmap": true, 00:21:55.096 "flush": true, 00:21:55.096 "reset": true, 00:21:55.096 "nvme_admin": false, 00:21:55.096 "nvme_io": false, 00:21:55.096 "nvme_io_md": false, 00:21:55.096 "write_zeroes": true, 00:21:55.096 "zcopy": true, 00:21:55.096 "get_zone_info": false, 00:21:55.096 "zone_management": false, 00:21:55.096 "zone_append": false, 00:21:55.096 "compare": false, 00:21:55.096 "compare_and_write": false, 00:21:55.096 "abort": true, 00:21:55.096 "seek_hole": false, 00:21:55.096 "seek_data": false, 00:21:55.096 "copy": true, 00:21:55.096 "nvme_iov_md": false 
00:21:55.096 }, 00:21:55.096 "memory_domains": [ 00:21:55.096 { 00:21:55.096 "dma_device_id": "system", 00:21:55.096 "dma_device_type": 1 00:21:55.096 }, 00:21:55.096 { 00:21:55.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.096 "dma_device_type": 2 00:21:55.096 } 00:21:55.096 ], 00:21:55.096 "driver_specific": {} 00:21:55.096 } 00:21:55.096 ] 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.096 "name": "Existed_Raid", 00:21:55.096 "uuid": "54575daa-bcaf-4cd7-ad10-eb9b6182aa67", 00:21:55.096 "strip_size_kb": 0, 00:21:55.096 "state": "online", 00:21:55.096 "raid_level": "raid1", 00:21:55.096 "superblock": true, 00:21:55.096 "num_base_bdevs": 2, 00:21:55.096 "num_base_bdevs_discovered": 2, 00:21:55.096 "num_base_bdevs_operational": 2, 00:21:55.096 "base_bdevs_list": [ 00:21:55.096 { 00:21:55.096 "name": "BaseBdev1", 00:21:55.096 "uuid": "924543b3-8eda-48a4-8233-06331e78d205", 00:21:55.096 "is_configured": true, 00:21:55.096 "data_offset": 256, 00:21:55.096 "data_size": 7936 00:21:55.096 }, 00:21:55.096 { 00:21:55.096 "name": "BaseBdev2", 00:21:55.096 "uuid": "db439270-5816-413a-8477-e593dfe149f3", 00:21:55.096 "is_configured": true, 00:21:55.096 "data_offset": 256, 00:21:55.096 "data_size": 7936 00:21:55.096 } 00:21:55.096 ] 00:21:55.096 }' 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.096 14:21:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:55.663 14:21:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:55.663 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:55.664 [2024-11-27 14:21:26.016160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.664 "name": "Existed_Raid", 00:21:55.664 "aliases": [ 00:21:55.664 "54575daa-bcaf-4cd7-ad10-eb9b6182aa67" 00:21:55.664 ], 00:21:55.664 "product_name": "Raid Volume", 00:21:55.664 "block_size": 4096, 00:21:55.664 "num_blocks": 7936, 00:21:55.664 "uuid": "54575daa-bcaf-4cd7-ad10-eb9b6182aa67", 00:21:55.664 "assigned_rate_limits": { 00:21:55.664 "rw_ios_per_sec": 0, 00:21:55.664 "rw_mbytes_per_sec": 0, 00:21:55.664 "r_mbytes_per_sec": 0, 00:21:55.664 "w_mbytes_per_sec": 0 00:21:55.664 }, 00:21:55.664 "claimed": false, 00:21:55.664 "zoned": false, 00:21:55.664 "supported_io_types": { 00:21:55.664 "read": true, 
00:21:55.664 "write": true, 00:21:55.664 "unmap": false, 00:21:55.664 "flush": false, 00:21:55.664 "reset": true, 00:21:55.664 "nvme_admin": false, 00:21:55.664 "nvme_io": false, 00:21:55.664 "nvme_io_md": false, 00:21:55.664 "write_zeroes": true, 00:21:55.664 "zcopy": false, 00:21:55.664 "get_zone_info": false, 00:21:55.664 "zone_management": false, 00:21:55.664 "zone_append": false, 00:21:55.664 "compare": false, 00:21:55.664 "compare_and_write": false, 00:21:55.664 "abort": false, 00:21:55.664 "seek_hole": false, 00:21:55.664 "seek_data": false, 00:21:55.664 "copy": false, 00:21:55.664 "nvme_iov_md": false 00:21:55.664 }, 00:21:55.664 "memory_domains": [ 00:21:55.664 { 00:21:55.664 "dma_device_id": "system", 00:21:55.664 "dma_device_type": 1 00:21:55.664 }, 00:21:55.664 { 00:21:55.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.664 "dma_device_type": 2 00:21:55.664 }, 00:21:55.664 { 00:21:55.664 "dma_device_id": "system", 00:21:55.664 "dma_device_type": 1 00:21:55.664 }, 00:21:55.664 { 00:21:55.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.664 "dma_device_type": 2 00:21:55.664 } 00:21:55.664 ], 00:21:55.664 "driver_specific": { 00:21:55.664 "raid": { 00:21:55.664 "uuid": "54575daa-bcaf-4cd7-ad10-eb9b6182aa67", 00:21:55.664 "strip_size_kb": 0, 00:21:55.664 "state": "online", 00:21:55.664 "raid_level": "raid1", 00:21:55.664 "superblock": true, 00:21:55.664 "num_base_bdevs": 2, 00:21:55.664 "num_base_bdevs_discovered": 2, 00:21:55.664 "num_base_bdevs_operational": 2, 00:21:55.664 "base_bdevs_list": [ 00:21:55.664 { 00:21:55.664 "name": "BaseBdev1", 00:21:55.664 "uuid": "924543b3-8eda-48a4-8233-06331e78d205", 00:21:55.664 "is_configured": true, 00:21:55.664 "data_offset": 256, 00:21:55.664 "data_size": 7936 00:21:55.664 }, 00:21:55.664 { 00:21:55.664 "name": "BaseBdev2", 00:21:55.664 "uuid": "db439270-5816-413a-8477-e593dfe149f3", 00:21:55.664 "is_configured": true, 00:21:55.664 "data_offset": 256, 00:21:55.664 "data_size": 7936 00:21:55.664 } 
00:21:55.664 ] 00:21:55.664 } 00:21:55.664 } 00:21:55.664 }' 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:55.664 BaseBdev2' 00:21:55.664 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.922 [2024-11-27 14:21:26.287916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:55.922 14:21:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.922 "name": "Existed_Raid", 00:21:55.922 "uuid": "54575daa-bcaf-4cd7-ad10-eb9b6182aa67", 00:21:55.922 "strip_size_kb": 0, 00:21:55.922 "state": "online", 00:21:55.922 "raid_level": "raid1", 00:21:55.922 "superblock": true, 00:21:55.922 
"num_base_bdevs": 2, 00:21:55.922 "num_base_bdevs_discovered": 1, 00:21:55.922 "num_base_bdevs_operational": 1, 00:21:55.922 "base_bdevs_list": [ 00:21:55.922 { 00:21:55.922 "name": null, 00:21:55.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.922 "is_configured": false, 00:21:55.922 "data_offset": 0, 00:21:55.922 "data_size": 7936 00:21:55.922 }, 00:21:55.922 { 00:21:55.922 "name": "BaseBdev2", 00:21:55.922 "uuid": "db439270-5816-413a-8477-e593dfe149f3", 00:21:55.922 "is_configured": true, 00:21:55.922 "data_offset": 256, 00:21:55.922 "data_size": 7936 00:21:55.922 } 00:21:55.922 ] 00:21:55.922 }' 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.922 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.490 14:21:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.490 [2024-11-27 14:21:26.943209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:56.490 [2024-11-27 14:21:26.943485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.749 [2024-11-27 14:21:27.033922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.749 [2024-11-27 14:21:27.033994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.749 [2024-11-27 14:21:27.034014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:56.749 14:21:27 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86592 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86592 ']' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86592 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86592 00:21:56.749 killing process with pid 86592 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86592' 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86592 00:21:56.749 [2024-11-27 14:21:27.123898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:56.749 14:21:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86592 00:21:56.749 [2024-11-27 14:21:27.138657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.121 14:21:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:21:58.121 00:21:58.121 real 0m5.629s 00:21:58.121 user 0m8.531s 00:21:58.121 sys 0m0.781s 00:21:58.121 14:21:28 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.121 ************************************ 00:21:58.121 END TEST raid_state_function_test_sb_4k 00:21:58.121 14:21:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.121 ************************************ 00:21:58.122 14:21:28 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:58.122 14:21:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:58.122 14:21:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.122 14:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:58.122 ************************************ 00:21:58.122 START TEST raid_superblock_test_4k 00:21:58.122 ************************************ 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:58.122 
14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86840 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86840 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86840 ']' 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.122 14:21:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.122 [2024-11-27 14:21:28.403693] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:21:58.122 [2024-11-27 14:21:28.404170] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86840 ] 00:21:58.122 [2024-11-27 14:21:28.588282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.381 [2024-11-27 14:21:28.722321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.638 [2024-11-27 14:21:28.930167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.638 [2024-11-27 14:21:28.930403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.894 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.154 malloc1 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.154 [2024-11-27 14:21:29.455210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:59.154 [2024-11-27 14:21:29.455288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.154 [2024-11-27 14:21:29.455320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:59.154 [2024-11-27 14:21:29.455334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.154 [2024-11-27 14:21:29.458242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.154 [2024-11-27 14:21:29.458289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:59.154 pt1 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.154 malloc2 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.154 [2024-11-27 14:21:29.508699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:59.154 [2024-11-27 14:21:29.508912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.154 [2024-11-27 14:21:29.508996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:59.154 [2024-11-27 14:21:29.509116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.154 [2024-11-27 14:21:29.512027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.154 pt2 00:21:59.154 
[2024-11-27 14:21:29.512180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.154 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 [2024-11-27 14:21:29.516813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:59.155 [2024-11-27 14:21:29.519379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:59.155 [2024-11-27 14:21:29.519616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:59.155 [2024-11-27 14:21:29.519640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:59.155 [2024-11-27 14:21:29.520012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:59.155 [2024-11-27 14:21:29.520217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:59.155 [2024-11-27 14:21:29.520242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:59.155 [2024-11-27 14:21:29.520426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.155 "name": "raid_bdev1", 00:21:59.155 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:21:59.155 "strip_size_kb": 0, 00:21:59.155 "state": "online", 00:21:59.155 "raid_level": "raid1", 00:21:59.155 "superblock": true, 00:21:59.155 "num_base_bdevs": 2, 00:21:59.155 
"num_base_bdevs_discovered": 2, 00:21:59.155 "num_base_bdevs_operational": 2, 00:21:59.155 "base_bdevs_list": [ 00:21:59.155 { 00:21:59.155 "name": "pt1", 00:21:59.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:59.155 "is_configured": true, 00:21:59.155 "data_offset": 256, 00:21:59.155 "data_size": 7936 00:21:59.155 }, 00:21:59.155 { 00:21:59.155 "name": "pt2", 00:21:59.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:59.155 "is_configured": true, 00:21:59.155 "data_offset": 256, 00:21:59.155 "data_size": 7936 00:21:59.155 } 00:21:59.155 ] 00:21:59.155 }' 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.155 14:21:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.720 [2024-11-27 14:21:30.037328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:59.720 "name": "raid_bdev1", 00:21:59.720 "aliases": [ 00:21:59.720 "5f67ca7f-541b-4b2a-a789-19019f2baf18" 00:21:59.720 ], 00:21:59.720 "product_name": "Raid Volume", 00:21:59.720 "block_size": 4096, 00:21:59.720 "num_blocks": 7936, 00:21:59.720 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:21:59.720 "assigned_rate_limits": { 00:21:59.720 "rw_ios_per_sec": 0, 00:21:59.720 "rw_mbytes_per_sec": 0, 00:21:59.720 "r_mbytes_per_sec": 0, 00:21:59.720 "w_mbytes_per_sec": 0 00:21:59.720 }, 00:21:59.720 "claimed": false, 00:21:59.720 "zoned": false, 00:21:59.720 "supported_io_types": { 00:21:59.720 "read": true, 00:21:59.720 "write": true, 00:21:59.720 "unmap": false, 00:21:59.720 "flush": false, 00:21:59.720 "reset": true, 00:21:59.720 "nvme_admin": false, 00:21:59.720 "nvme_io": false, 00:21:59.720 "nvme_io_md": false, 00:21:59.720 "write_zeroes": true, 00:21:59.720 "zcopy": false, 00:21:59.720 "get_zone_info": false, 00:21:59.720 "zone_management": false, 00:21:59.720 "zone_append": false, 00:21:59.720 "compare": false, 00:21:59.720 "compare_and_write": false, 00:21:59.720 "abort": false, 00:21:59.720 "seek_hole": false, 00:21:59.720 "seek_data": false, 00:21:59.720 "copy": false, 00:21:59.720 "nvme_iov_md": false 00:21:59.720 }, 00:21:59.720 "memory_domains": [ 00:21:59.720 { 00:21:59.720 "dma_device_id": "system", 00:21:59.720 "dma_device_type": 1 00:21:59.720 }, 00:21:59.720 { 00:21:59.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.720 "dma_device_type": 2 00:21:59.720 }, 00:21:59.720 { 00:21:59.720 "dma_device_id": "system", 00:21:59.720 "dma_device_type": 1 00:21:59.720 }, 00:21:59.720 { 00:21:59.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.720 "dma_device_type": 2 00:21:59.720 } 00:21:59.720 ], 
00:21:59.720 "driver_specific": { 00:21:59.720 "raid": { 00:21:59.720 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:21:59.720 "strip_size_kb": 0, 00:21:59.720 "state": "online", 00:21:59.720 "raid_level": "raid1", 00:21:59.720 "superblock": true, 00:21:59.720 "num_base_bdevs": 2, 00:21:59.720 "num_base_bdevs_discovered": 2, 00:21:59.720 "num_base_bdevs_operational": 2, 00:21:59.720 "base_bdevs_list": [ 00:21:59.720 { 00:21:59.720 "name": "pt1", 00:21:59.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:59.720 "is_configured": true, 00:21:59.720 "data_offset": 256, 00:21:59.720 "data_size": 7936 00:21:59.720 }, 00:21:59.720 { 00:21:59.720 "name": "pt2", 00:21:59.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:59.720 "is_configured": true, 00:21:59.720 "data_offset": 256, 00:21:59.720 "data_size": 7936 00:21:59.720 } 00:21:59.720 ] 00:21:59.720 } 00:21:59.720 } 00:21:59.720 }' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:59.720 pt2' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.720 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.978 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.979 [2024-11-27 14:21:30.317369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5f67ca7f-541b-4b2a-a789-19019f2baf18 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5f67ca7f-541b-4b2a-a789-19019f2baf18 ']' 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.979 [2024-11-27 14:21:30.376997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.979 [2024-11-27 14:21:30.377151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.979 [2024-11-27 14:21:30.377427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.979 [2024-11-27 14:21:30.377518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.979 [2024-11-27 14:21:30.377538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.979 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.237 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.237 [2024-11-27 14:21:30.521077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:00.237 [2024-11-27 14:21:30.523783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:00.238 [2024-11-27 14:21:30.524009] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:00.238 [2024-11-27 14:21:30.524112] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:00.238 [2024-11-27 14:21:30.524140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.238 [2024-11-27 14:21:30.524156] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:00.238 request: 00:22:00.238 { 00:22:00.238 "name": "raid_bdev1", 00:22:00.238 "raid_level": "raid1", 00:22:00.238 "base_bdevs": [ 00:22:00.238 "malloc1", 00:22:00.238 "malloc2" 00:22:00.238 ], 00:22:00.238 "superblock": false, 00:22:00.238 "method": "bdev_raid_create", 00:22:00.238 "req_id": 1 00:22:00.238 } 00:22:00.238 Got JSON-RPC error response 00:22:00.238 response: 00:22:00.238 { 00:22:00.238 "code": -17, 00:22:00.238 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:00.238 } 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.238 [2024-11-27 14:21:30.585083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:00.238 [2024-11-27 14:21:30.585296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.238 [2024-11-27 14:21:30.585373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:00.238 [2024-11-27 14:21:30.585609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.238 [2024-11-27 14:21:30.588587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.238 [2024-11-27 14:21:30.588638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:00.238 [2024-11-27 14:21:30.588748] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:00.238 [2024-11-27 14:21:30.588849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:00.238 pt1 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.238 "name": "raid_bdev1", 00:22:00.238 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:00.238 "strip_size_kb": 0, 00:22:00.238 "state": "configuring", 00:22:00.238 "raid_level": "raid1", 00:22:00.238 "superblock": true, 00:22:00.238 "num_base_bdevs": 2, 00:22:00.238 "num_base_bdevs_discovered": 1, 00:22:00.238 "num_base_bdevs_operational": 2, 00:22:00.238 "base_bdevs_list": [ 00:22:00.238 { 00:22:00.238 "name": "pt1", 00:22:00.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:00.238 "is_configured": true, 00:22:00.238 "data_offset": 256, 00:22:00.238 "data_size": 7936 00:22:00.238 }, 00:22:00.238 { 00:22:00.238 "name": null, 00:22:00.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:00.238 "is_configured": false, 00:22:00.238 "data_offset": 256, 00:22:00.238 "data_size": 7936 00:22:00.238 } 
00:22:00.238 ] 00:22:00.238 }' 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.238 14:21:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.805 [2024-11-27 14:21:31.161292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:00.805 [2024-11-27 14:21:31.161518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.805 [2024-11-27 14:21:31.161667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:00.805 [2024-11-27 14:21:31.161808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.805 [2024-11-27 14:21:31.162577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.805 [2024-11-27 14:21:31.162741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:00.805 [2024-11-27 14:21:31.162891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:00.805 [2024-11-27 14:21:31.162936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:00.805 [2024-11-27 14:21:31.163094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:00.805 [2024-11-27 14:21:31.163114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:00.805 [2024-11-27 14:21:31.163424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:00.805 [2024-11-27 14:21:31.163627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:00.805 [2024-11-27 14:21:31.163643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:00.805 [2024-11-27 14:21:31.163835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.805 pt2 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.805 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.806 "name": "raid_bdev1", 00:22:00.806 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:00.806 "strip_size_kb": 0, 00:22:00.806 "state": "online", 00:22:00.806 "raid_level": "raid1", 00:22:00.806 "superblock": true, 00:22:00.806 "num_base_bdevs": 2, 00:22:00.806 "num_base_bdevs_discovered": 2, 00:22:00.806 "num_base_bdevs_operational": 2, 00:22:00.806 "base_bdevs_list": [ 00:22:00.806 { 00:22:00.806 "name": "pt1", 00:22:00.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:00.806 "is_configured": true, 00:22:00.806 "data_offset": 256, 00:22:00.806 "data_size": 7936 00:22:00.806 }, 00:22:00.806 { 00:22:00.806 "name": "pt2", 00:22:00.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:00.806 "is_configured": true, 00:22:00.806 "data_offset": 256, 00:22:00.806 "data_size": 7936 00:22:00.806 } 00:22:00.806 ] 00:22:00.806 }' 00:22:00.806 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.806 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.372 [2024-11-27 14:21:31.673733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.372 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:01.372 "name": "raid_bdev1", 00:22:01.372 "aliases": [ 00:22:01.372 "5f67ca7f-541b-4b2a-a789-19019f2baf18" 00:22:01.372 ], 00:22:01.372 "product_name": "Raid Volume", 00:22:01.372 "block_size": 4096, 00:22:01.372 "num_blocks": 7936, 00:22:01.372 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:01.372 "assigned_rate_limits": { 00:22:01.372 "rw_ios_per_sec": 0, 00:22:01.372 "rw_mbytes_per_sec": 0, 00:22:01.372 "r_mbytes_per_sec": 0, 00:22:01.372 "w_mbytes_per_sec": 0 00:22:01.372 }, 00:22:01.372 "claimed": false, 00:22:01.372 "zoned": false, 00:22:01.372 "supported_io_types": { 00:22:01.372 "read": true, 00:22:01.372 "write": true, 00:22:01.372 "unmap": false, 
00:22:01.372 "flush": false, 00:22:01.372 "reset": true, 00:22:01.372 "nvme_admin": false, 00:22:01.372 "nvme_io": false, 00:22:01.372 "nvme_io_md": false, 00:22:01.372 "write_zeroes": true, 00:22:01.372 "zcopy": false, 00:22:01.373 "get_zone_info": false, 00:22:01.373 "zone_management": false, 00:22:01.373 "zone_append": false, 00:22:01.373 "compare": false, 00:22:01.373 "compare_and_write": false, 00:22:01.373 "abort": false, 00:22:01.373 "seek_hole": false, 00:22:01.373 "seek_data": false, 00:22:01.373 "copy": false, 00:22:01.373 "nvme_iov_md": false 00:22:01.373 }, 00:22:01.373 "memory_domains": [ 00:22:01.373 { 00:22:01.373 "dma_device_id": "system", 00:22:01.373 "dma_device_type": 1 00:22:01.373 }, 00:22:01.373 { 00:22:01.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.373 "dma_device_type": 2 00:22:01.373 }, 00:22:01.373 { 00:22:01.373 "dma_device_id": "system", 00:22:01.373 "dma_device_type": 1 00:22:01.373 }, 00:22:01.373 { 00:22:01.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.373 "dma_device_type": 2 00:22:01.373 } 00:22:01.373 ], 00:22:01.373 "driver_specific": { 00:22:01.373 "raid": { 00:22:01.373 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:01.373 "strip_size_kb": 0, 00:22:01.373 "state": "online", 00:22:01.373 "raid_level": "raid1", 00:22:01.373 "superblock": true, 00:22:01.373 "num_base_bdevs": 2, 00:22:01.373 "num_base_bdevs_discovered": 2, 00:22:01.373 "num_base_bdevs_operational": 2, 00:22:01.373 "base_bdevs_list": [ 00:22:01.373 { 00:22:01.373 "name": "pt1", 00:22:01.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:01.373 "is_configured": true, 00:22:01.373 "data_offset": 256, 00:22:01.373 "data_size": 7936 00:22:01.373 }, 00:22:01.373 { 00:22:01.373 "name": "pt2", 00:22:01.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:01.373 "is_configured": true, 00:22:01.373 "data_offset": 256, 00:22:01.373 "data_size": 7936 00:22:01.373 } 00:22:01.373 ] 00:22:01.373 } 00:22:01.373 } 00:22:01.373 }' 00:22:01.373 
14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:01.373 pt2' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.373 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:01.631 [2024-11-27 14:21:31.925807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5f67ca7f-541b-4b2a-a789-19019f2baf18 '!=' 5f67ca7f-541b-4b2a-a789-19019f2baf18 ']' 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.631 [2024-11-27 14:21:31.985551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.631 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.632 14:21:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.632 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.632 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.632 "name": "raid_bdev1", 00:22:01.632 "uuid": 
"5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:01.632 "strip_size_kb": 0, 00:22:01.632 "state": "online", 00:22:01.632 "raid_level": "raid1", 00:22:01.632 "superblock": true, 00:22:01.632 "num_base_bdevs": 2, 00:22:01.632 "num_base_bdevs_discovered": 1, 00:22:01.632 "num_base_bdevs_operational": 1, 00:22:01.632 "base_bdevs_list": [ 00:22:01.632 { 00:22:01.632 "name": null, 00:22:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.632 "is_configured": false, 00:22:01.632 "data_offset": 0, 00:22:01.632 "data_size": 7936 00:22:01.632 }, 00:22:01.632 { 00:22:01.632 "name": "pt2", 00:22:01.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:01.632 "is_configured": true, 00:22:01.632 "data_offset": 256, 00:22:01.632 "data_size": 7936 00:22:01.632 } 00:22:01.632 ] 00:22:01.632 }' 00:22:01.632 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.632 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.199 [2024-11-27 14:21:32.497681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.199 [2024-11-27 14:21:32.497717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.199 [2024-11-27 14:21:32.497831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.199 [2024-11-27 14:21:32.497901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.199 [2024-11-27 14:21:32.497921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.199 [2024-11-27 14:21:32.577673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.199 [2024-11-27 14:21:32.577892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.199 [2024-11-27 14:21:32.577964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:02.199 [2024-11-27 14:21:32.578086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.199 [2024-11-27 14:21:32.581028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.199 [2024-11-27 14:21:32.581079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:02.199 [2024-11-27 14:21:32.581188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:02.199 [2024-11-27 14:21:32.581257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:02.199 [2024-11-27 14:21:32.581398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:02.199 [2024-11-27 14:21:32.581420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:02.199 [2024-11-27 14:21:32.581707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:02.199 [2024-11-27 14:21:32.581944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:02.199 [2024-11-27 14:21:32.581961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:22:02.199 pt2 00:22:02.199 [2024-11-27 14:21:32.582192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.199 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.200 14:21:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.200 "name": "raid_bdev1", 00:22:02.200 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:02.200 "strip_size_kb": 0, 00:22:02.200 "state": "online", 00:22:02.200 "raid_level": "raid1", 00:22:02.200 "superblock": true, 00:22:02.200 "num_base_bdevs": 2, 00:22:02.200 "num_base_bdevs_discovered": 1, 00:22:02.200 "num_base_bdevs_operational": 1, 00:22:02.200 "base_bdevs_list": [ 00:22:02.200 { 00:22:02.200 "name": null, 00:22:02.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.200 "is_configured": false, 00:22:02.200 "data_offset": 256, 00:22:02.200 "data_size": 7936 00:22:02.200 }, 00:22:02.200 { 00:22:02.200 "name": "pt2", 00:22:02.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:02.200 "is_configured": true, 00:22:02.200 "data_offset": 256, 00:22:02.200 "data_size": 7936 00:22:02.200 } 00:22:02.200 ] 00:22:02.200 }' 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.200 14:21:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.766 [2024-11-27 14:21:33.122247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.766 [2024-11-27 14:21:33.122286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.766 [2024-11-27 14:21:33.122381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.766 [2024-11-27 14:21:33.122453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:22:02.766 [2024-11-27 14:21:33.122476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.766 [2024-11-27 14:21:33.186335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:02.766 [2024-11-27 14:21:33.186564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.766 [2024-11-27 14:21:33.186718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:02.766 [2024-11-27 14:21:33.186860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.766 [2024-11-27 14:21:33.190035] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.766 [2024-11-27 14:21:33.190202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:02.766 [2024-11-27 14:21:33.190497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:02.766 [2024-11-27 14:21:33.190670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:02.766 [2024-11-27 14:21:33.191063] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:02.766 [2024-11-27 14:21:33.191220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.766 pt1 00:22:02.766 [2024-11-27 14:21:33.191346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:02.766 [2024-11-27 14:21:33.191550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.766 [2024-11-27 14:21:33.191844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:02.766 [2024-11-27 14:21:33.191955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:02.766 [2024-11-27 14:21:33.192384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.766 
[2024-11-27 14:21:33.192772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.766 [2024-11-27 14:21:33.192963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:02.766 [2024-11-27 14:21:33.193359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.766 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.766 "name": "raid_bdev1", 00:22:02.766 "uuid": "5f67ca7f-541b-4b2a-a789-19019f2baf18", 00:22:02.766 "strip_size_kb": 0, 00:22:02.766 "state": "online", 00:22:02.767 
"raid_level": "raid1", 00:22:02.767 "superblock": true, 00:22:02.767 "num_base_bdevs": 2, 00:22:02.767 "num_base_bdevs_discovered": 1, 00:22:02.767 "num_base_bdevs_operational": 1, 00:22:02.767 "base_bdevs_list": [ 00:22:02.767 { 00:22:02.767 "name": null, 00:22:02.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.767 "is_configured": false, 00:22:02.767 "data_offset": 256, 00:22:02.767 "data_size": 7936 00:22:02.767 }, 00:22:02.767 { 00:22:02.767 "name": "pt2", 00:22:02.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:02.767 "is_configured": true, 00:22:02.767 "data_offset": 256, 00:22:02.767 "data_size": 7936 00:22:02.767 } 00:22:02.767 ] 00:22:02.767 }' 00:22:02.767 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.767 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:22:03.332 [2024-11-27 14:21:33.747029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.332 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5f67ca7f-541b-4b2a-a789-19019f2baf18 '!=' 5f67ca7f-541b-4b2a-a789-19019f2baf18 ']' 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86840 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86840 ']' 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86840 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86840 00:22:03.333 killing process with pid 86840 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86840' 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86840 00:22:03.333 [2024-11-27 14:21:33.825332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:03.333 14:21:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86840 00:22:03.333 [2024-11-27 14:21:33.825478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.333 [2024-11-27 14:21:33.825561] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:03.333 [2024-11-27 14:21:33.825597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:03.590 [2024-11-27 14:21:34.020468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:04.968 ************************************ 00:22:04.968 END TEST raid_superblock_test_4k 00:22:04.968 ************************************ 00:22:04.968 14:21:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:22:04.968 00:22:04.968 real 0m6.818s 00:22:04.968 user 0m10.796s 00:22:04.968 sys 0m0.972s 00:22:04.968 14:21:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.968 14:21:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.968 14:21:35 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:22:04.968 14:21:35 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:04.968 14:21:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:04.968 14:21:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.968 14:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:04.968 ************************************ 00:22:04.968 START TEST raid_rebuild_test_sb_4k 00:22:04.968 ************************************ 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:04.968 
14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87174 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87174 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87174 ']' 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.968 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.969 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.969 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.969 [2024-11-27 14:21:35.276498] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:22:04.969 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:04.969 Zero copy mechanism will not be used. 
00:22:04.969 [2024-11-27 14:21:35.277376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87174 ] 00:22:04.969 [2024-11-27 14:21:35.463968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.227 [2024-11-27 14:21:35.594845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.486 [2024-11-27 14:21:35.808040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.486 [2024-11-27 14:21:35.808095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.745 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.745 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:05.745 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:05.745 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:22:05.745 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.745 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 BaseBdev1_malloc 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 [2024-11-27 14:21:36.262172] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:06.004 [2024-11-27 14:21:36.262387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.004 [2024-11-27 14:21:36.262604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:06.004 [2024-11-27 14:21:36.262639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.004 [2024-11-27 14:21:36.265379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.004 [2024-11-27 14:21:36.265432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:06.004 BaseBdev1 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 BaseBdev2_malloc 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.004 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 [2024-11-27 14:21:36.313931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:06.004 [2024-11-27 14:21:36.314011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:22:06.005 [2024-11-27 14:21:36.314044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:06.005 [2024-11-27 14:21:36.314061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.005 [2024-11-27 14:21:36.316764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.005 BaseBdev2 00:22:06.005 [2024-11-27 14:21:36.316947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.005 spare_malloc 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.005 spare_delay 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.005 
[2024-11-27 14:21:36.379867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:06.005 [2024-11-27 14:21:36.380077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.005 [2024-11-27 14:21:36.380242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:06.005 [2024-11-27 14:21:36.380362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.005 [2024-11-27 14:21:36.383178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.005 [2024-11-27 14:21:36.383231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:06.005 spare 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.005 [2024-11-27 14:21:36.388086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.005 [2024-11-27 14:21:36.390593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.005 [2024-11-27 14:21:36.390988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:06.005 [2024-11-27 14:21:36.391127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:06.005 [2024-11-27 14:21:36.391463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:06.005 [2024-11-27 14:21:36.391701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:06.005 [2024-11-27 
14:21:36.391718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:06.005 [2024-11-27 14:21:36.392017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.005 "name": "raid_bdev1", 00:22:06.005 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:06.005 "strip_size_kb": 0, 00:22:06.005 "state": "online", 00:22:06.005 "raid_level": "raid1", 00:22:06.005 "superblock": true, 00:22:06.005 "num_base_bdevs": 2, 00:22:06.005 "num_base_bdevs_discovered": 2, 00:22:06.005 "num_base_bdevs_operational": 2, 00:22:06.005 "base_bdevs_list": [ 00:22:06.005 { 00:22:06.005 "name": "BaseBdev1", 00:22:06.005 "uuid": "0e728828-978b-5000-b567-7d96285fb2c5", 00:22:06.005 "is_configured": true, 00:22:06.005 "data_offset": 256, 00:22:06.005 "data_size": 7936 00:22:06.005 }, 00:22:06.005 { 00:22:06.005 "name": "BaseBdev2", 00:22:06.005 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:06.005 "is_configured": true, 00:22:06.005 "data_offset": 256, 00:22:06.005 "data_size": 7936 00:22:06.005 } 00:22:06.005 ] 00:22:06.005 }' 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.005 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.572 [2024-11-27 14:21:36.932609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.572 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:06.572 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:06.572 
14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:06.831 [2024-11-27 14:21:37.324418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:06.831 /dev/nbd0 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.089 1+0 records in 00:22:07.089 1+0 records out 00:22:07.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454601 s, 9.0 MB/s 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:07.089 14:21:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:07.089 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:08.027 7936+0 records in 00:22:08.027 7936+0 records out 00:22:08.027 32505856 bytes (33 MB, 31 MiB) copied, 0.981172 s, 33.1 MB/s 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.027 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:08.287 [2024-11-27 14:21:38.653131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 [2024-11-27 14:21:38.689246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.287 14:21:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.287 "name": "raid_bdev1", 00:22:08.287 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:08.287 "strip_size_kb": 0, 00:22:08.287 "state": "online", 00:22:08.287 "raid_level": "raid1", 00:22:08.287 "superblock": true, 00:22:08.287 "num_base_bdevs": 2, 00:22:08.287 "num_base_bdevs_discovered": 1, 00:22:08.287 "num_base_bdevs_operational": 1, 00:22:08.287 "base_bdevs_list": [ 00:22:08.287 { 00:22:08.287 "name": null, 00:22:08.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.287 "is_configured": false, 00:22:08.287 "data_offset": 0, 00:22:08.287 "data_size": 7936 00:22:08.287 }, 00:22:08.287 { 00:22:08.287 "name": "BaseBdev2", 00:22:08.287 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:08.287 "is_configured": true, 00:22:08.287 "data_offset": 256, 00:22:08.287 
"data_size": 7936 00:22:08.287 } 00:22:08.287 ] 00:22:08.287 }' 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.287 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.853 14:21:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:08.853 14:21:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.853 14:21:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.853 [2024-11-27 14:21:39.241416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:08.853 [2024-11-27 14:21:39.257734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:08.853 14:21:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.853 14:21:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:08.853 [2024-11-27 14:21:39.260257] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.788 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.047 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.047 "name": "raid_bdev1", 00:22:10.047 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:10.047 "strip_size_kb": 0, 00:22:10.047 "state": "online", 00:22:10.047 "raid_level": "raid1", 00:22:10.047 "superblock": true, 00:22:10.047 "num_base_bdevs": 2, 00:22:10.047 "num_base_bdevs_discovered": 2, 00:22:10.047 "num_base_bdevs_operational": 2, 00:22:10.047 "process": { 00:22:10.047 "type": "rebuild", 00:22:10.047 "target": "spare", 00:22:10.047 "progress": { 00:22:10.047 "blocks": 2560, 00:22:10.047 "percent": 32 00:22:10.047 } 00:22:10.047 }, 00:22:10.047 "base_bdevs_list": [ 00:22:10.047 { 00:22:10.047 "name": "spare", 00:22:10.047 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:10.047 "is_configured": true, 00:22:10.047 "data_offset": 256, 00:22:10.047 "data_size": 7936 00:22:10.047 }, 00:22:10.047 { 00:22:10.047 "name": "BaseBdev2", 00:22:10.048 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:10.048 "is_configured": true, 00:22:10.048 "data_offset": 256, 00:22:10.048 "data_size": 7936 00:22:10.048 } 00:22:10.048 ] 00:22:10.048 }' 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.048 [2024-11-27 14:21:40.421907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.048 [2024-11-27 14:21:40.469251] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:10.048 [2024-11-27 14:21:40.469507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.048 [2024-11-27 14:21:40.469534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.048 [2024-11-27 14:21:40.469549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.048 "name": "raid_bdev1", 00:22:10.048 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:10.048 "strip_size_kb": 0, 00:22:10.048 "state": "online", 00:22:10.048 "raid_level": "raid1", 00:22:10.048 "superblock": true, 00:22:10.048 "num_base_bdevs": 2, 00:22:10.048 "num_base_bdevs_discovered": 1, 00:22:10.048 "num_base_bdevs_operational": 1, 00:22:10.048 "base_bdevs_list": [ 00:22:10.048 { 00:22:10.048 "name": null, 00:22:10.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.048 "is_configured": false, 00:22:10.048 "data_offset": 0, 00:22:10.048 "data_size": 7936 00:22:10.048 }, 00:22:10.048 { 00:22:10.048 "name": "BaseBdev2", 00:22:10.048 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:10.048 "is_configured": true, 00:22:10.048 "data_offset": 256, 00:22:10.048 "data_size": 7936 00:22:10.048 } 00:22:10.048 ] 00:22:10.048 }' 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.048 14:21:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.617 14:21:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.617 "name": "raid_bdev1", 00:22:10.617 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:10.617 "strip_size_kb": 0, 00:22:10.617 "state": "online", 00:22:10.617 "raid_level": "raid1", 00:22:10.617 "superblock": true, 00:22:10.617 "num_base_bdevs": 2, 00:22:10.617 "num_base_bdevs_discovered": 1, 00:22:10.617 "num_base_bdevs_operational": 1, 00:22:10.617 "base_bdevs_list": [ 00:22:10.617 { 00:22:10.617 "name": null, 00:22:10.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.617 "is_configured": false, 00:22:10.617 "data_offset": 0, 00:22:10.617 "data_size": 7936 00:22:10.617 }, 00:22:10.617 { 00:22:10.617 "name": "BaseBdev2", 00:22:10.617 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:10.617 "is_configured": true, 00:22:10.617 "data_offset": 
256, 00:22:10.617 "data_size": 7936 00:22:10.617 } 00:22:10.617 ] 00:22:10.617 }' 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:10.617 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.876 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:10.876 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.876 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.876 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.876 [2024-11-27 14:21:41.168429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.876 [2024-11-27 14:21:41.184571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:10.876 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.876 14:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:10.876 [2024-11-27 14:21:41.187221] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.811 "name": "raid_bdev1", 00:22:11.811 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:11.811 "strip_size_kb": 0, 00:22:11.811 "state": "online", 00:22:11.811 "raid_level": "raid1", 00:22:11.811 "superblock": true, 00:22:11.811 "num_base_bdevs": 2, 00:22:11.811 "num_base_bdevs_discovered": 2, 00:22:11.811 "num_base_bdevs_operational": 2, 00:22:11.811 "process": { 00:22:11.811 "type": "rebuild", 00:22:11.811 "target": "spare", 00:22:11.811 "progress": { 00:22:11.811 "blocks": 2560, 00:22:11.811 "percent": 32 00:22:11.811 } 00:22:11.811 }, 00:22:11.811 "base_bdevs_list": [ 00:22:11.811 { 00:22:11.811 "name": "spare", 00:22:11.811 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:11.811 "is_configured": true, 00:22:11.811 "data_offset": 256, 00:22:11.811 "data_size": 7936 00:22:11.811 }, 00:22:11.811 { 00:22:11.811 "name": "BaseBdev2", 00:22:11.811 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:11.811 "is_configured": true, 00:22:11.811 "data_offset": 256, 00:22:11.811 "data_size": 7936 00:22:11.811 } 00:22:11.811 ] 00:22:11.811 }' 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:22:11.811 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:12.070 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=744 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.070 14:21:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:12.070 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.071 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.071 "name": "raid_bdev1", 00:22:12.071 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:12.071 "strip_size_kb": 0, 00:22:12.071 "state": "online", 00:22:12.071 "raid_level": "raid1", 00:22:12.071 "superblock": true, 00:22:12.071 "num_base_bdevs": 2, 00:22:12.071 "num_base_bdevs_discovered": 2, 00:22:12.071 "num_base_bdevs_operational": 2, 00:22:12.071 "process": { 00:22:12.071 "type": "rebuild", 00:22:12.071 "target": "spare", 00:22:12.071 "progress": { 00:22:12.071 "blocks": 2816, 00:22:12.071 "percent": 35 00:22:12.071 } 00:22:12.071 }, 00:22:12.071 "base_bdevs_list": [ 00:22:12.071 { 00:22:12.071 "name": "spare", 00:22:12.071 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:12.071 "is_configured": true, 00:22:12.071 "data_offset": 256, 00:22:12.071 "data_size": 7936 00:22:12.071 }, 00:22:12.071 { 00:22:12.071 "name": "BaseBdev2", 00:22:12.071 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:12.071 "is_configured": true, 00:22:12.071 "data_offset": 256, 00:22:12.071 "data_size": 7936 00:22:12.071 } 00:22:12.071 ] 00:22:12.071 }' 00:22:12.071 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.071 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.071 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.071 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.071 14:21:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.448 "name": "raid_bdev1", 00:22:13.448 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:13.448 "strip_size_kb": 0, 00:22:13.448 "state": "online", 00:22:13.448 "raid_level": "raid1", 00:22:13.448 "superblock": true, 00:22:13.448 "num_base_bdevs": 2, 00:22:13.448 "num_base_bdevs_discovered": 2, 00:22:13.448 "num_base_bdevs_operational": 2, 00:22:13.448 "process": { 00:22:13.448 "type": "rebuild", 00:22:13.448 "target": "spare", 00:22:13.448 "progress": { 00:22:13.448 "blocks": 5888, 00:22:13.448 "percent": 74 00:22:13.448 } 00:22:13.448 }, 00:22:13.448 "base_bdevs_list": [ 00:22:13.448 { 
00:22:13.448 "name": "spare", 00:22:13.448 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:13.448 "is_configured": true, 00:22:13.448 "data_offset": 256, 00:22:13.448 "data_size": 7936 00:22:13.448 }, 00:22:13.448 { 00:22:13.448 "name": "BaseBdev2", 00:22:13.448 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:13.448 "is_configured": true, 00:22:13.448 "data_offset": 256, 00:22:13.448 "data_size": 7936 00:22:13.448 } 00:22:13.448 ] 00:22:13.448 }' 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.448 14:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:14.040 [2024-11-27 14:21:44.310221] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:14.040 [2024-11-27 14:21:44.310331] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:14.040 [2024-11-27 14:21:44.310497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.298 "name": "raid_bdev1", 00:22:14.298 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:14.298 "strip_size_kb": 0, 00:22:14.298 "state": "online", 00:22:14.298 "raid_level": "raid1", 00:22:14.298 "superblock": true, 00:22:14.298 "num_base_bdevs": 2, 00:22:14.298 "num_base_bdevs_discovered": 2, 00:22:14.298 "num_base_bdevs_operational": 2, 00:22:14.298 "base_bdevs_list": [ 00:22:14.298 { 00:22:14.298 "name": "spare", 00:22:14.298 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:14.298 "is_configured": true, 00:22:14.298 "data_offset": 256, 00:22:14.298 "data_size": 7936 00:22:14.298 }, 00:22:14.298 { 00:22:14.298 "name": "BaseBdev2", 00:22:14.298 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:14.298 "is_configured": true, 00:22:14.298 "data_offset": 256, 00:22:14.298 "data_size": 7936 00:22:14.298 } 00:22:14.298 ] 00:22:14.298 }' 00:22:14.298 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.556 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.556 "name": "raid_bdev1", 00:22:14.556 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:14.556 "strip_size_kb": 0, 00:22:14.556 "state": "online", 00:22:14.556 "raid_level": "raid1", 00:22:14.556 "superblock": true, 00:22:14.556 "num_base_bdevs": 2, 00:22:14.556 "num_base_bdevs_discovered": 2, 00:22:14.557 "num_base_bdevs_operational": 2, 00:22:14.557 "base_bdevs_list": [ 00:22:14.557 { 00:22:14.557 "name": "spare", 00:22:14.557 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:14.557 "is_configured": true, 00:22:14.557 
"data_offset": 256, 00:22:14.557 "data_size": 7936 00:22:14.557 }, 00:22:14.557 { 00:22:14.557 "name": "BaseBdev2", 00:22:14.557 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:14.557 "is_configured": true, 00:22:14.557 "data_offset": 256, 00:22:14.557 "data_size": 7936 00:22:14.557 } 00:22:14.557 ] 00:22:14.557 }' 00:22:14.557 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.557 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.557 14:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:14.557 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.815 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.815 "name": "raid_bdev1", 00:22:14.815 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:14.815 "strip_size_kb": 0, 00:22:14.815 "state": "online", 00:22:14.815 "raid_level": "raid1", 00:22:14.815 "superblock": true, 00:22:14.815 "num_base_bdevs": 2, 00:22:14.815 "num_base_bdevs_discovered": 2, 00:22:14.815 "num_base_bdevs_operational": 2, 00:22:14.815 "base_bdevs_list": [ 00:22:14.815 { 00:22:14.815 "name": "spare", 00:22:14.815 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:14.815 "is_configured": true, 00:22:14.815 "data_offset": 256, 00:22:14.815 "data_size": 7936 00:22:14.815 }, 00:22:14.815 { 00:22:14.815 "name": "BaseBdev2", 00:22:14.815 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:14.815 "is_configured": true, 00:22:14.815 "data_offset": 256, 00:22:14.815 "data_size": 7936 00:22:14.815 } 00:22:14.815 ] 00:22:14.815 }' 00:22:14.815 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.815 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:15.073 
[2024-11-27 14:21:45.546560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.073 [2024-11-27 14:21:45.546746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.073 [2024-11-27 14:21:45.546898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.073 [2024-11-27 14:21:45.546998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.073 [2024-11-27 14:21:45.547019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:15.073 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.332 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:15.591 /dev/nbd0 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.591 1+0 records in 00:22:15.591 1+0 records out 00:22:15.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595318 s, 6.9 MB/s 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.591 14:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:15.849 /dev/nbd1 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.849 1+0 records in 00:22:15.849 1+0 records out 00:22:15.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041802 s, 9.8 MB/s 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.849 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.108 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.367 14:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:16.626 14:21:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.626 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.884 [2024-11-27 14:21:47.137848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:16.885 [2024-11-27 14:21:47.138058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.885 [2024-11-27 14:21:47.138111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:16.885 [2024-11-27 14:21:47.138128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.885 [2024-11-27 14:21:47.141145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.885 
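The nbd setup traced above follows a fixed pattern: after `nbd_start_disk`, `waitfornbd` polls `/proc/partitions` until the device name appears, then a one-block `dd` read confirms the device actually answers I/O. A minimal re-creation of that pattern — the function names and the `PARTITIONS_FILE` override are hypothetical, and the probe drops `iflag=direct` so this sketch also runs against regular files:

```shell
# Sketch of the waitfornbd helper seen in the trace: poll the partition
# table until the device shows up. PARTITIONS_FILE defaults to the real
# /proc/partitions; the override exists only so the sketch is testable.
wait_for_nbd() {
  nbd_name=$1
  retries=${2:-20}
  parts=${PARTITIONS_FILE:-/proc/partitions}
  i=1
  while [ "$i" -le "$retries" ]; do
    grep -q -w "$nbd_name" "$parts" && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

# Sketch of the dd sanity check: read one 4 KiB block and require
# non-empty output, mirroring the trace's "size != 0" test.
probe_read() {
  dev=$1
  out=$(mktemp)
  dd if="$dev" of="$out" bs=4096 count=1 2>/dev/null || return 1
  [ "$(wc -c < "$out")" -gt 0 ]
}
```

The retry loop caps at 20 attempts, matching the `(( i <= 20 ))` bound visible in the trace; the real helper also backs off between `grep` attempts.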
[2024-11-27 14:21:47.141192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:16.885 [2024-11-27 14:21:47.141315] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:16.885 [2024-11-27 14:21:47.141396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:16.885 [2024-11-27 14:21:47.141591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:16.885 spare 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.885 [2024-11-27 14:21:47.241737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:16.885 [2024-11-27 14:21:47.241809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:16.885 [2024-11-27 14:21:47.242286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:16.885 [2024-11-27 14:21:47.242596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:16.885 [2024-11-27 14:21:47.242614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:16.885 [2024-11-27 14:21:47.242908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:16.885 14:21:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.885 "name": "raid_bdev1", 00:22:16.885 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:16.885 "strip_size_kb": 0, 00:22:16.885 "state": "online", 00:22:16.885 "raid_level": "raid1", 00:22:16.885 "superblock": true, 00:22:16.885 "num_base_bdevs": 2, 00:22:16.885 "num_base_bdevs_discovered": 2, 00:22:16.885 "num_base_bdevs_operational": 2, 
00:22:16.885 "base_bdevs_list": [ 00:22:16.885 { 00:22:16.885 "name": "spare", 00:22:16.885 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:16.885 "is_configured": true, 00:22:16.885 "data_offset": 256, 00:22:16.885 "data_size": 7936 00:22:16.885 }, 00:22:16.885 { 00:22:16.885 "name": "BaseBdev2", 00:22:16.885 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:16.885 "is_configured": true, 00:22:16.885 "data_offset": 256, 00:22:16.885 "data_size": 7936 00:22:16.885 } 00:22:16.885 ] 00:22:16.885 }' 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.885 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.453 "name": "raid_bdev1", 00:22:17.453 
"uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:17.453 "strip_size_kb": 0, 00:22:17.453 "state": "online", 00:22:17.453 "raid_level": "raid1", 00:22:17.453 "superblock": true, 00:22:17.453 "num_base_bdevs": 2, 00:22:17.453 "num_base_bdevs_discovered": 2, 00:22:17.453 "num_base_bdevs_operational": 2, 00:22:17.453 "base_bdevs_list": [ 00:22:17.453 { 00:22:17.453 "name": "spare", 00:22:17.453 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:17.453 "is_configured": true, 00:22:17.453 "data_offset": 256, 00:22:17.453 "data_size": 7936 00:22:17.453 }, 00:22:17.453 { 00:22:17.453 "name": "BaseBdev2", 00:22:17.453 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:17.453 "is_configured": true, 00:22:17.453 "data_offset": 256, 00:22:17.453 "data_size": 7936 00:22:17.453 } 00:22:17.453 ] 00:22:17.453 }' 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:17.453 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.711 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:17.711 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.711 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.711 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.711 14:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.711 [2024-11-27 14:21:48.047115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.711 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.712 14:21:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.712 "name": "raid_bdev1", 00:22:17.712 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:17.712 "strip_size_kb": 0, 00:22:17.712 "state": "online", 00:22:17.712 "raid_level": "raid1", 00:22:17.712 "superblock": true, 00:22:17.712 "num_base_bdevs": 2, 00:22:17.712 "num_base_bdevs_discovered": 1, 00:22:17.712 "num_base_bdevs_operational": 1, 00:22:17.712 "base_bdevs_list": [ 00:22:17.712 { 00:22:17.712 "name": null, 00:22:17.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.712 "is_configured": false, 00:22:17.712 "data_offset": 0, 00:22:17.712 "data_size": 7936 00:22:17.712 }, 00:22:17.712 { 00:22:17.712 "name": "BaseBdev2", 00:22:17.712 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:17.712 "is_configured": true, 00:22:17.712 "data_offset": 256, 00:22:17.712 "data_size": 7936 00:22:17.712 } 00:22:17.712 ] 00:22:17.712 }' 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.712 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.277 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:18.277 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.277 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.277 [2024-11-27 14:21:48.555275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:18.277 [2024-11-27 14:21:48.555711] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:22:18.277 [2024-11-27 14:21:48.555747] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:18.277 [2024-11-27 14:21:48.555813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:18.277 [2024-11-27 14:21:48.571416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:18.277 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.277 14:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:18.277 [2024-11-27 14:21:48.574240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:19.221 "name": "raid_bdev1", 00:22:19.221 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:19.221 "strip_size_kb": 0, 00:22:19.221 "state": "online", 00:22:19.221 "raid_level": "raid1", 00:22:19.221 "superblock": true, 00:22:19.221 "num_base_bdevs": 2, 00:22:19.221 "num_base_bdevs_discovered": 2, 00:22:19.221 "num_base_bdevs_operational": 2, 00:22:19.221 "process": { 00:22:19.221 "type": "rebuild", 00:22:19.221 "target": "spare", 00:22:19.221 "progress": { 00:22:19.221 "blocks": 2560, 00:22:19.221 "percent": 32 00:22:19.221 } 00:22:19.221 }, 00:22:19.221 "base_bdevs_list": [ 00:22:19.221 { 00:22:19.221 "name": "spare", 00:22:19.221 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:19.221 "is_configured": true, 00:22:19.221 "data_offset": 256, 00:22:19.221 "data_size": 7936 00:22:19.221 }, 00:22:19.221 { 00:22:19.221 "name": "BaseBdev2", 00:22:19.221 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:19.221 "is_configured": true, 00:22:19.221 "data_offset": 256, 00:22:19.221 "data_size": 7936 00:22:19.221 } 00:22:19.221 ] 00:22:19.221 }' 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.221 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.221 [2024-11-27 14:21:49.715924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
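The repeated `verify_raid_bdev_state` calls in this trace pull `bdev_raid_get_bdevs` output through `jq -r '.[] | select(.name == "raid_bdev1")'` and compare fields one by one. A hedged stand-in for that check — the sample JSON, the helper name, and the use of `python3` in place of `jq` are all illustration-only:

```shell
# Sample shaped like the rpc.py bdev_raid_get_bdevs output in the log.
raid_json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}]'

# Select the named bdev and assert state, raid level, and operational
# base-bdev count, roughly what verify_raid_bdev_state does with jq.
check_raid_state() {
  name=$1 state=$2 level=$3 operational=$4
  printf '%s' "$raid_json" | python3 -c '
import json, sys
name, state, level, oper = sys.argv[1:5]
bdev = next(b for b in json.load(sys.stdin) if b["name"] == name)
assert bdev["state"] == state, bdev["state"]
assert bdev["raid_level"] == level, bdev["raid_level"]
assert bdev["num_base_bdevs_operational"] == int(oper)
' "$name" "$state" "$level" "$operational"
}

check_raid_state raid_bdev1 online raid1 2 && echo "raid_bdev1 state ok"
```

After `bdev_raid_remove_base_bdev spare`, the same check would be run with an operational count of 1, matching the `num_base_bdevs_operational": 1` JSON that follows removals in this trace.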
00:22:19.480 [2024-11-27 14:21:49.783892] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:19.480 [2024-11-27 14:21:49.784290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.480 [2024-11-27 14:21:49.784327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:19.480 [2024-11-27 14:21:49.784348] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.480 "name": "raid_bdev1", 00:22:19.480 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:19.480 "strip_size_kb": 0, 00:22:19.480 "state": "online", 00:22:19.480 "raid_level": "raid1", 00:22:19.480 "superblock": true, 00:22:19.480 "num_base_bdevs": 2, 00:22:19.480 "num_base_bdevs_discovered": 1, 00:22:19.480 "num_base_bdevs_operational": 1, 00:22:19.480 "base_bdevs_list": [ 00:22:19.480 { 00:22:19.480 "name": null, 00:22:19.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.480 "is_configured": false, 00:22:19.480 "data_offset": 0, 00:22:19.480 "data_size": 7936 00:22:19.480 }, 00:22:19.480 { 00:22:19.480 "name": "BaseBdev2", 00:22:19.480 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:19.480 "is_configured": true, 00:22:19.480 "data_offset": 256, 00:22:19.480 "data_size": 7936 00:22:19.480 } 00:22:19.480 ] 00:22:19.480 }' 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.480 14:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.046 14:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:20.046 14:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.046 14:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.046 [2024-11-27 14:21:50.292239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.046 [2024-11-27 
14:21:50.292329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.046 [2024-11-27 14:21:50.292363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:20.046 [2024-11-27 14:21:50.292382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.046 [2024-11-27 14:21:50.293039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.046 [2024-11-27 14:21:50.293080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.046 [2024-11-27 14:21:50.293212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:20.046 [2024-11-27 14:21:50.293238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:20.046 [2024-11-27 14:21:50.293257] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:20.046 [2024-11-27 14:21:50.293293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:20.046 spare 00:22:20.046 [2024-11-27 14:21:50.308873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:20.046 14:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.046 14:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:20.046 [2024-11-27 14:21:50.311448] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.981 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.981 "name": "raid_bdev1", 00:22:20.981 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:20.981 "strip_size_kb": 0, 00:22:20.981 
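The `verify_raid_bdev_process` checks here lean on jq's alternative operator: `'.process.type // "none"'` yields `"none"` whenever no rebuild is in flight, so idle and rebuilding states can be compared with the same filter. A sketch of the same fallback logic (the `process_field` helper is made up, and `python3` stands in for `jq`):

```shell
# Read one field of a raid bdev's "process" object from stdin, falling
# back to "none" when the object or field is absent -- the behavior of
# jq's '.process.type // "none"' in the trace.
process_field() {
  field=$1
  python3 -c '
import json, sys
bdev = json.load(sys.stdin)
proc = bdev.get("process") or {}
print(proc.get(sys.argv[1], "none"))
' "$field"
}

echo '{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}' \
  | process_field type    # prints: rebuild
echo '{"name":"raid_bdev1"}' | process_field type    # prints: none
```

This is why the trace can assert `[[ rebuild == \r\e\b\u\i\l\d ]]` during a rebuild and `[[ none == \n\o\n\e ]]` once the process has finished, without branching on whether `process` exists.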
"state": "online", 00:22:20.982 "raid_level": "raid1", 00:22:20.982 "superblock": true, 00:22:20.982 "num_base_bdevs": 2, 00:22:20.982 "num_base_bdevs_discovered": 2, 00:22:20.982 "num_base_bdevs_operational": 2, 00:22:20.982 "process": { 00:22:20.982 "type": "rebuild", 00:22:20.982 "target": "spare", 00:22:20.982 "progress": { 00:22:20.982 "blocks": 2560, 00:22:20.982 "percent": 32 00:22:20.982 } 00:22:20.982 }, 00:22:20.982 "base_bdevs_list": [ 00:22:20.982 { 00:22:20.982 "name": "spare", 00:22:20.982 "uuid": "523ffd0d-7efc-5691-abb2-c580c205ca2e", 00:22:20.982 "is_configured": true, 00:22:20.982 "data_offset": 256, 00:22:20.982 "data_size": 7936 00:22:20.982 }, 00:22:20.982 { 00:22:20.982 "name": "BaseBdev2", 00:22:20.982 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:20.982 "is_configured": true, 00:22:20.982 "data_offset": 256, 00:22:20.982 "data_size": 7936 00:22:20.982 } 00:22:20.982 ] 00:22:20.982 }' 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.982 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.982 [2024-11-27 14:21:51.464852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:21.240 [2024-11-27 14:21:51.520728] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:22:21.240 [2024-11-27 14:21:51.520986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.240 [2024-11-27 14:21:51.521132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:21.240 [2024-11-27 14:21:51.521158] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.240 14:21:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.240 "name": "raid_bdev1", 00:22:21.240 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:21.240 "strip_size_kb": 0, 00:22:21.240 "state": "online", 00:22:21.240 "raid_level": "raid1", 00:22:21.240 "superblock": true, 00:22:21.240 "num_base_bdevs": 2, 00:22:21.240 "num_base_bdevs_discovered": 1, 00:22:21.240 "num_base_bdevs_operational": 1, 00:22:21.240 "base_bdevs_list": [ 00:22:21.240 { 00:22:21.240 "name": null, 00:22:21.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.240 "is_configured": false, 00:22:21.240 "data_offset": 0, 00:22:21.240 "data_size": 7936 00:22:21.240 }, 00:22:21.240 { 00:22:21.240 "name": "BaseBdev2", 00:22:21.240 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:21.240 "is_configured": true, 00:22:21.240 "data_offset": 256, 00:22:21.240 "data_size": 7936 00:22:21.240 } 00:22:21.240 ] 00:22:21.240 }' 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.240 14:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.807 "name": "raid_bdev1", 00:22:21.807 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:21.807 "strip_size_kb": 0, 00:22:21.807 "state": "online", 00:22:21.807 "raid_level": "raid1", 00:22:21.807 "superblock": true, 00:22:21.807 "num_base_bdevs": 2, 00:22:21.807 "num_base_bdevs_discovered": 1, 00:22:21.807 "num_base_bdevs_operational": 1, 00:22:21.807 "base_bdevs_list": [ 00:22:21.807 { 00:22:21.807 "name": null, 00:22:21.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.807 "is_configured": false, 00:22:21.807 "data_offset": 0, 00:22:21.807 "data_size": 7936 00:22:21.807 }, 00:22:21.807 { 00:22:21.807 "name": "BaseBdev2", 00:22:21.807 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:21.807 "is_configured": true, 00:22:21.807 "data_offset": 256, 00:22:21.807 "data_size": 7936 00:22:21.807 } 00:22:21.807 ] 00:22:21.807 }' 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 [2024-11-27 14:21:52.293367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:21.807 [2024-11-27 14:21:52.293564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.807 [2024-11-27 14:21:52.293614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:21.807 [2024-11-27 14:21:52.293645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.807 [2024-11-27 14:21:52.294274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.807 [2024-11-27 14:21:52.294302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.807 [2024-11-27 14:21:52.294422] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:21.807 [2024-11-27 14:21:52.294445] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:21.807 [2024-11-27 14:21:52.294459] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:21.807 [2024-11-27 14:21:52.294473] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:21.807 BaseBdev1 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.807 14:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.182 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.182 "name": "raid_bdev1", 00:22:23.182 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:23.182 "strip_size_kb": 0, 00:22:23.182 "state": "online", 00:22:23.182 "raid_level": "raid1", 00:22:23.182 "superblock": true, 00:22:23.182 "num_base_bdevs": 2, 00:22:23.182 "num_base_bdevs_discovered": 1, 00:22:23.182 "num_base_bdevs_operational": 1, 00:22:23.182 "base_bdevs_list": [ 00:22:23.182 { 00:22:23.182 "name": null, 00:22:23.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.183 "is_configured": false, 00:22:23.183 "data_offset": 0, 00:22:23.183 "data_size": 7936 00:22:23.183 }, 00:22:23.183 { 00:22:23.183 "name": "BaseBdev2", 00:22:23.183 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:23.183 "is_configured": true, 00:22:23.183 "data_offset": 256, 00:22:23.183 "data_size": 7936 00:22:23.183 } 00:22:23.183 ] 00:22:23.183 }' 00:22:23.183 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.183 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.441 "name": "raid_bdev1", 00:22:23.441 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:23.441 "strip_size_kb": 0, 00:22:23.441 "state": "online", 00:22:23.441 "raid_level": "raid1", 00:22:23.441 "superblock": true, 00:22:23.441 "num_base_bdevs": 2, 00:22:23.441 "num_base_bdevs_discovered": 1, 00:22:23.441 "num_base_bdevs_operational": 1, 00:22:23.441 "base_bdevs_list": [ 00:22:23.441 { 00:22:23.441 "name": null, 00:22:23.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.441 "is_configured": false, 00:22:23.441 "data_offset": 0, 00:22:23.441 "data_size": 7936 00:22:23.441 }, 00:22:23.441 { 00:22:23.441 "name": "BaseBdev2", 00:22:23.441 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:23.441 "is_configured": true, 00:22:23.441 "data_offset": 256, 00:22:23.441 "data_size": 7936 00:22:23.441 } 00:22:23.441 ] 00:22:23.441 }' 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.441 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.701 [2024-11-27 14:21:53.986023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:23.701 [2024-11-27 14:21:53.986389] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:23.701 [2024-11-27 14:21:53.986558] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:23.701 request: 00:22:23.701 { 00:22:23.701 "base_bdev": "BaseBdev1", 00:22:23.701 "raid_bdev": "raid_bdev1", 00:22:23.701 "method": "bdev_raid_add_base_bdev", 00:22:23.701 "req_id": 1 00:22:23.701 } 00:22:23.701 Got JSON-RPC error response 00:22:23.701 response: 00:22:23.701 { 00:22:23.701 "code": -22, 00:22:23.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:23.701 } 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.701 14:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.644 14:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.644 "name": "raid_bdev1", 00:22:24.644 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:24.644 "strip_size_kb": 0, 00:22:24.644 "state": "online", 00:22:24.644 "raid_level": "raid1", 00:22:24.644 "superblock": true, 00:22:24.644 "num_base_bdevs": 2, 00:22:24.644 "num_base_bdevs_discovered": 1, 00:22:24.644 "num_base_bdevs_operational": 1, 00:22:24.644 "base_bdevs_list": [ 00:22:24.644 { 00:22:24.644 "name": null, 00:22:24.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.644 "is_configured": false, 00:22:24.644 "data_offset": 0, 00:22:24.644 "data_size": 7936 00:22:24.644 }, 00:22:24.644 { 00:22:24.644 "name": "BaseBdev2", 00:22:24.644 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:24.644 "is_configured": true, 00:22:24.644 "data_offset": 256, 00:22:24.644 "data_size": 7936 00:22:24.644 } 00:22:24.644 ] 00:22:24.644 }' 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.644 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.210 14:21:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.210 "name": "raid_bdev1", 00:22:25.210 "uuid": "e8c7119e-a2cd-46f4-95e2-06cefb07eea6", 00:22:25.210 "strip_size_kb": 0, 00:22:25.210 "state": "online", 00:22:25.210 "raid_level": "raid1", 00:22:25.210 "superblock": true, 00:22:25.210 "num_base_bdevs": 2, 00:22:25.210 "num_base_bdevs_discovered": 1, 00:22:25.210 "num_base_bdevs_operational": 1, 00:22:25.210 "base_bdevs_list": [ 00:22:25.210 { 00:22:25.210 "name": null, 00:22:25.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.210 "is_configured": false, 00:22:25.210 "data_offset": 0, 00:22:25.210 "data_size": 7936 00:22:25.210 }, 00:22:25.210 { 00:22:25.210 "name": "BaseBdev2", 00:22:25.210 "uuid": "df5d66b7-fc64-5fca-96eb-a69f6d319c07", 00:22:25.210 "is_configured": true, 00:22:25.210 "data_offset": 256, 00:22:25.210 "data_size": 7936 00:22:25.210 } 00:22:25.210 ] 00:22:25.210 }' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:25.210 14:21:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87174 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87174 ']' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87174 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87174 00:22:25.210 killing process with pid 87174 00:22:25.210 Received shutdown signal, test time was about 60.000000 seconds 00:22:25.210 00:22:25.210 Latency(us) 00:22:25.210 [2024-11-27T14:21:55.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.210 [2024-11-27T14:21:55.723Z] =================================================================================================================== 00:22:25.210 [2024-11-27T14:21:55.723Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87174' 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87174 00:22:25.210 [2024-11-27 14:21:55.720534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.210 14:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87174 00:22:25.210 [2024-11-27 14:21:55.720689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.210 [2024-11-27 
14:21:55.720764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.210 [2024-11-27 14:21:55.720784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:25.777 [2024-11-27 14:21:55.992843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.769 ************************************ 00:22:26.769 END TEST raid_rebuild_test_sb_4k 00:22:26.769 ************************************ 00:22:26.769 14:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:22:26.769 00:22:26.769 real 0m21.873s 00:22:26.769 user 0m29.598s 00:22:26.769 sys 0m2.517s 00:22:26.769 14:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.769 14:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.769 14:21:57 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:22:26.769 14:21:57 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:26.769 14:21:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:26.769 14:21:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.769 14:21:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.769 ************************************ 00:22:26.769 START TEST raid_state_function_test_sb_md_separate 00:22:26.769 ************************************ 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:26.769 
14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:26.769 14:21:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87878 00:22:26.769 Process raid pid: 87878 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87878' 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87878 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87878 ']' 00:22:26.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.769 14:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.769 [2024-11-27 14:21:57.206531] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:22:26.769 [2024-11-27 14:21:57.206930] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.027 [2024-11-27 14:21:57.397888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.286 [2024-11-27 14:21:57.554803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.286 [2024-11-27 14:21:57.768567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.286 [2024-11-27 14:21:57.768879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.873 [2024-11-27 14:21:58.204131] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:27.873 [2024-11-27 14:21:58.204351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:22:27.873 [2024-11-27 14:21:58.204479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:27.873 [2024-11-27 14:21:58.204515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.873 14:21:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.873 "name": "Existed_Raid", 00:22:27.873 "uuid": "c04cb63b-9884-4509-8966-f3a5c010f3d1", 00:22:27.873 "strip_size_kb": 0, 00:22:27.873 "state": "configuring", 00:22:27.873 "raid_level": "raid1", 00:22:27.873 "superblock": true, 00:22:27.873 "num_base_bdevs": 2, 00:22:27.873 "num_base_bdevs_discovered": 0, 00:22:27.873 "num_base_bdevs_operational": 2, 00:22:27.873 "base_bdevs_list": [ 00:22:27.873 { 00:22:27.873 "name": "BaseBdev1", 00:22:27.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.873 "is_configured": false, 00:22:27.873 "data_offset": 0, 00:22:27.873 "data_size": 0 00:22:27.873 }, 00:22:27.873 { 00:22:27.873 "name": "BaseBdev2", 00:22:27.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.873 "is_configured": false, 00:22:27.873 "data_offset": 0, 00:22:27.873 "data_size": 0 00:22:27.873 } 00:22:27.873 ] 00:22:27.873 }' 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.873 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.439 [2024-11-27 
14:21:58.704173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:28.439 [2024-11-27 14:21:58.704360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.439 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.440 [2024-11-27 14:21:58.712130] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:28.440 [2024-11-27 14:21:58.712306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:28.440 [2024-11-27 14:21:58.712427] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.440 [2024-11-27 14:21:58.712562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.440 BaseBdev1 00:22:28.440 [2024-11-27 14:21:58.758090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.440 [ 00:22:28.440 { 00:22:28.440 "name": "BaseBdev1", 00:22:28.440 "aliases": [ 00:22:28.440 "e2351c97-54da-4c34-a00c-07b2b99c4df7" 00:22:28.440 ], 00:22:28.440 "product_name": "Malloc disk", 00:22:28.440 
"block_size": 4096, 00:22:28.440 "num_blocks": 8192, 00:22:28.440 "uuid": "e2351c97-54da-4c34-a00c-07b2b99c4df7", 00:22:28.440 "md_size": 32, 00:22:28.440 "md_interleave": false, 00:22:28.440 "dif_type": 0, 00:22:28.440 "assigned_rate_limits": { 00:22:28.440 "rw_ios_per_sec": 0, 00:22:28.440 "rw_mbytes_per_sec": 0, 00:22:28.440 "r_mbytes_per_sec": 0, 00:22:28.440 "w_mbytes_per_sec": 0 00:22:28.440 }, 00:22:28.440 "claimed": true, 00:22:28.440 "claim_type": "exclusive_write", 00:22:28.440 "zoned": false, 00:22:28.440 "supported_io_types": { 00:22:28.440 "read": true, 00:22:28.440 "write": true, 00:22:28.440 "unmap": true, 00:22:28.440 "flush": true, 00:22:28.440 "reset": true, 00:22:28.440 "nvme_admin": false, 00:22:28.440 "nvme_io": false, 00:22:28.440 "nvme_io_md": false, 00:22:28.440 "write_zeroes": true, 00:22:28.440 "zcopy": true, 00:22:28.440 "get_zone_info": false, 00:22:28.440 "zone_management": false, 00:22:28.440 "zone_append": false, 00:22:28.440 "compare": false, 00:22:28.440 "compare_and_write": false, 00:22:28.440 "abort": true, 00:22:28.440 "seek_hole": false, 00:22:28.440 "seek_data": false, 00:22:28.440 "copy": true, 00:22:28.440 "nvme_iov_md": false 00:22:28.440 }, 00:22:28.440 "memory_domains": [ 00:22:28.440 { 00:22:28.440 "dma_device_id": "system", 00:22:28.440 "dma_device_type": 1 00:22:28.440 }, 00:22:28.440 { 00:22:28.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.440 "dma_device_type": 2 00:22:28.440 } 00:22:28.440 ], 00:22:28.440 "driver_specific": {} 00:22:28.440 } 00:22:28.440 ] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:28.440 14:21:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.440 "name": "Existed_Raid", 00:22:28.440 "uuid": "b0c7dd1c-58f2-4c42-ac6b-69fda609bbca", 
00:22:28.440 "strip_size_kb": 0, 00:22:28.440 "state": "configuring", 00:22:28.440 "raid_level": "raid1", 00:22:28.440 "superblock": true, 00:22:28.440 "num_base_bdevs": 2, 00:22:28.440 "num_base_bdevs_discovered": 1, 00:22:28.440 "num_base_bdevs_operational": 2, 00:22:28.440 "base_bdevs_list": [ 00:22:28.440 { 00:22:28.440 "name": "BaseBdev1", 00:22:28.440 "uuid": "e2351c97-54da-4c34-a00c-07b2b99c4df7", 00:22:28.440 "is_configured": true, 00:22:28.440 "data_offset": 256, 00:22:28.440 "data_size": 7936 00:22:28.440 }, 00:22:28.440 { 00:22:28.440 "name": "BaseBdev2", 00:22:28.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.440 "is_configured": false, 00:22:28.440 "data_offset": 0, 00:22:28.440 "data_size": 0 00:22:28.440 } 00:22:28.440 ] 00:22:28.440 }' 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.440 14:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.007 [2024-11-27 14:21:59.310328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:29.007 [2024-11-27 14:21:59.310544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:29.007 14:21:59 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.007 [2024-11-27 14:21:59.322381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.007 [2024-11-27 14:21:59.325008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:29.007 [2024-11-27 14:21:59.325188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.007 "name": "Existed_Raid", 00:22:29.007 "uuid": "6534f97a-29a1-4323-9156-a5f542af8e2b", 00:22:29.007 "strip_size_kb": 0, 00:22:29.007 "state": "configuring", 00:22:29.007 "raid_level": "raid1", 00:22:29.007 "superblock": true, 00:22:29.007 "num_base_bdevs": 2, 00:22:29.007 "num_base_bdevs_discovered": 1, 00:22:29.007 "num_base_bdevs_operational": 2, 00:22:29.007 "base_bdevs_list": [ 00:22:29.007 { 00:22:29.007 "name": "BaseBdev1", 00:22:29.007 "uuid": "e2351c97-54da-4c34-a00c-07b2b99c4df7", 00:22:29.007 "is_configured": true, 00:22:29.007 "data_offset": 256, 00:22:29.007 "data_size": 7936 00:22:29.007 }, 00:22:29.007 { 00:22:29.007 "name": "BaseBdev2", 00:22:29.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.007 "is_configured": false, 00:22:29.007 "data_offset": 0, 00:22:29.007 "data_size": 0 00:22:29.007 } 00:22:29.007 ] 00:22:29.007 }' 00:22:29.007 14:21:59 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.007 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 [2024-11-27 14:21:59.871432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:29.575 BaseBdev2 00:22:29.575 [2024-11-27 14:21:59.871977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:29.575 [2024-11-27 14:21:59.872016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:29.575 [2024-11-27 14:21:59.872119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:29.575 [2024-11-27 14:21:59.872302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:29.575 [2024-11-27 14:21:59.872326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:29.575 [2024-11-27 14:21:59.872445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 [ 00:22:29.575 { 00:22:29.575 "name": "BaseBdev2", 00:22:29.575 "aliases": [ 00:22:29.575 "03a8274a-1b85-48d0-8cef-67f862389673" 00:22:29.575 ], 00:22:29.575 "product_name": "Malloc disk", 00:22:29.575 "block_size": 4096, 00:22:29.575 "num_blocks": 8192, 00:22:29.575 "uuid": "03a8274a-1b85-48d0-8cef-67f862389673", 00:22:29.575 "md_size": 32, 00:22:29.575 "md_interleave": false, 00:22:29.575 "dif_type": 0, 00:22:29.575 "assigned_rate_limits": { 00:22:29.575 "rw_ios_per_sec": 0, 00:22:29.575 "rw_mbytes_per_sec": 0, 00:22:29.575 "r_mbytes_per_sec": 0, 00:22:29.575 "w_mbytes_per_sec": 0 00:22:29.575 }, 00:22:29.575 "claimed": true, 00:22:29.575 "claim_type": 
"exclusive_write", 00:22:29.575 "zoned": false, 00:22:29.575 "supported_io_types": { 00:22:29.575 "read": true, 00:22:29.575 "write": true, 00:22:29.575 "unmap": true, 00:22:29.575 "flush": true, 00:22:29.575 "reset": true, 00:22:29.575 "nvme_admin": false, 00:22:29.575 "nvme_io": false, 00:22:29.575 "nvme_io_md": false, 00:22:29.575 "write_zeroes": true, 00:22:29.575 "zcopy": true, 00:22:29.575 "get_zone_info": false, 00:22:29.575 "zone_management": false, 00:22:29.575 "zone_append": false, 00:22:29.575 "compare": false, 00:22:29.575 "compare_and_write": false, 00:22:29.575 "abort": true, 00:22:29.575 "seek_hole": false, 00:22:29.575 "seek_data": false, 00:22:29.575 "copy": true, 00:22:29.575 "nvme_iov_md": false 00:22:29.575 }, 00:22:29.575 "memory_domains": [ 00:22:29.575 { 00:22:29.575 "dma_device_id": "system", 00:22:29.575 "dma_device_type": 1 00:22:29.575 }, 00:22:29.575 { 00:22:29.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.575 "dma_device_type": 2 00:22:29.575 } 00:22:29.575 ], 00:22:29.575 "driver_specific": {} 00:22:29.575 } 00:22:29.575 ] 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.575 
14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.575 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.575 "name": "Existed_Raid", 00:22:29.575 "uuid": "6534f97a-29a1-4323-9156-a5f542af8e2b", 00:22:29.575 "strip_size_kb": 0, 00:22:29.575 "state": "online", 00:22:29.575 "raid_level": "raid1", 00:22:29.575 "superblock": true, 00:22:29.575 "num_base_bdevs": 2, 00:22:29.575 "num_base_bdevs_discovered": 2, 00:22:29.575 "num_base_bdevs_operational": 2, 00:22:29.575 
"base_bdevs_list": [ 00:22:29.575 { 00:22:29.575 "name": "BaseBdev1", 00:22:29.575 "uuid": "e2351c97-54da-4c34-a00c-07b2b99c4df7", 00:22:29.575 "is_configured": true, 00:22:29.575 "data_offset": 256, 00:22:29.575 "data_size": 7936 00:22:29.575 }, 00:22:29.575 { 00:22:29.576 "name": "BaseBdev2", 00:22:29.576 "uuid": "03a8274a-1b85-48d0-8cef-67f862389673", 00:22:29.576 "is_configured": true, 00:22:29.576 "data_offset": 256, 00:22:29.576 "data_size": 7936 00:22:29.576 } 00:22:29.576 ] 00:22:29.576 }' 00:22:29.576 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.576 14:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:22:30.144 [2024-11-27 14:22:00.444091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.144 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:30.144 "name": "Existed_Raid", 00:22:30.144 "aliases": [ 00:22:30.144 "6534f97a-29a1-4323-9156-a5f542af8e2b" 00:22:30.144 ], 00:22:30.144 "product_name": "Raid Volume", 00:22:30.144 "block_size": 4096, 00:22:30.144 "num_blocks": 7936, 00:22:30.144 "uuid": "6534f97a-29a1-4323-9156-a5f542af8e2b", 00:22:30.144 "md_size": 32, 00:22:30.144 "md_interleave": false, 00:22:30.144 "dif_type": 0, 00:22:30.144 "assigned_rate_limits": { 00:22:30.144 "rw_ios_per_sec": 0, 00:22:30.144 "rw_mbytes_per_sec": 0, 00:22:30.144 "r_mbytes_per_sec": 0, 00:22:30.144 "w_mbytes_per_sec": 0 00:22:30.144 }, 00:22:30.144 "claimed": false, 00:22:30.144 "zoned": false, 00:22:30.144 "supported_io_types": { 00:22:30.144 "read": true, 00:22:30.144 "write": true, 00:22:30.144 "unmap": false, 00:22:30.144 "flush": false, 00:22:30.144 "reset": true, 00:22:30.144 "nvme_admin": false, 00:22:30.144 "nvme_io": false, 00:22:30.144 "nvme_io_md": false, 00:22:30.144 "write_zeroes": true, 00:22:30.144 "zcopy": false, 00:22:30.144 "get_zone_info": false, 00:22:30.144 "zone_management": false, 00:22:30.144 "zone_append": false, 00:22:30.144 "compare": false, 00:22:30.144 "compare_and_write": false, 00:22:30.144 "abort": false, 00:22:30.144 "seek_hole": false, 00:22:30.144 "seek_data": false, 00:22:30.144 "copy": false, 00:22:30.144 "nvme_iov_md": false 00:22:30.144 }, 00:22:30.144 "memory_domains": [ 00:22:30.144 { 00:22:30.144 "dma_device_id": "system", 00:22:30.144 "dma_device_type": 1 00:22:30.144 }, 00:22:30.144 { 00:22:30.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.144 "dma_device_type": 2 00:22:30.144 }, 00:22:30.144 { 
00:22:30.144 "dma_device_id": "system", 00:22:30.144 "dma_device_type": 1 00:22:30.144 }, 00:22:30.144 { 00:22:30.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.144 "dma_device_type": 2 00:22:30.144 } 00:22:30.144 ], 00:22:30.144 "driver_specific": { 00:22:30.144 "raid": { 00:22:30.144 "uuid": "6534f97a-29a1-4323-9156-a5f542af8e2b", 00:22:30.144 "strip_size_kb": 0, 00:22:30.144 "state": "online", 00:22:30.144 "raid_level": "raid1", 00:22:30.144 "superblock": true, 00:22:30.144 "num_base_bdevs": 2, 00:22:30.144 "num_base_bdevs_discovered": 2, 00:22:30.144 "num_base_bdevs_operational": 2, 00:22:30.144 "base_bdevs_list": [ 00:22:30.144 { 00:22:30.144 "name": "BaseBdev1", 00:22:30.144 "uuid": "e2351c97-54da-4c34-a00c-07b2b99c4df7", 00:22:30.144 "is_configured": true, 00:22:30.144 "data_offset": 256, 00:22:30.144 "data_size": 7936 00:22:30.144 }, 00:22:30.144 { 00:22:30.144 "name": "BaseBdev2", 00:22:30.144 "uuid": "03a8274a-1b85-48d0-8cef-67f862389673", 00:22:30.144 "is_configured": true, 00:22:30.144 "data_offset": 256, 00:22:30.144 "data_size": 7936 00:22:30.144 } 00:22:30.144 ] 00:22:30.144 } 00:22:30.144 } 00:22:30.144 }' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:30.145 BaseBdev2' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.145 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.404 [2024-11-27 14:22:00.707799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.404 "name": "Existed_Raid", 00:22:30.404 "uuid": "6534f97a-29a1-4323-9156-a5f542af8e2b", 00:22:30.404 "strip_size_kb": 0, 00:22:30.404 "state": "online", 00:22:30.404 "raid_level": "raid1", 00:22:30.404 "superblock": true, 00:22:30.404 "num_base_bdevs": 2, 00:22:30.404 "num_base_bdevs_discovered": 1, 00:22:30.404 "num_base_bdevs_operational": 1, 00:22:30.404 "base_bdevs_list": [ 00:22:30.404 { 00:22:30.404 "name": null, 00:22:30.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.404 "is_configured": false, 00:22:30.404 "data_offset": 0, 00:22:30.404 "data_size": 7936 00:22:30.404 }, 00:22:30.404 { 00:22:30.404 "name": "BaseBdev2", 00:22:30.404 "uuid": 
"03a8274a-1b85-48d0-8cef-67f862389673", 00:22:30.404 "is_configured": true, 00:22:30.404 "data_offset": 256, 00:22:30.404 "data_size": 7936 00:22:30.404 } 00:22:30.404 ] 00:22:30.404 }' 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.404 14:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.970 [2024-11-27 14:22:01.369488] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:30.970 [2024-11-27 14:22:01.369813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:30.970 [2024-11-27 14:22:01.464835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.970 [2024-11-27 14:22:01.465115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.970 [2024-11-27 14:22:01.465151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.970 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:30.971 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:30.971 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.971 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:30.971 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.971 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.971 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:31.228 14:22:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87878 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87878 ']' 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87878 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87878 00:22:31.228 killing process with pid 87878 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87878' 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87878 00:22:31.228 [2024-11-27 14:22:01.550501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:31.228 14:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87878 00:22:31.228 [2024-11-27 14:22:01.565383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:32.179 14:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:22:32.179 00:22:32.179 real 0m5.538s 00:22:32.179 user 0m8.272s 00:22:32.179 sys 0m0.828s 00:22:32.179 14:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.179 
************************************ 00:22:32.179 END TEST raid_state_function_test_sb_md_separate 00:22:32.179 ************************************ 00:22:32.179 14:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.438 14:22:02 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:32.438 14:22:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:32.438 14:22:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.438 14:22:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.438 ************************************ 00:22:32.438 START TEST raid_superblock_test_md_separate 00:22:32.438 ************************************ 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
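The `verify_raid_bdev_state` helper traced repeatedly above boils down to selecting one entry from `bdev_raid_get_bdevs` output and comparing a handful of fields. A minimal Python sketch of that logic, using the `Existed_Raid` JSON captured earlier in this log (the `rpc_cmd`/`jq` pipeline is replaced by plain `json` parsing purely for illustration — this is not the test harness's actual implementation):

```python
import json

# Fields as captured from `rpc_cmd bdev_raid_get_bdevs all` earlier in this log.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "uuid": "6534f97a-29a1-4323-9156-a5f542af8e2b",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror the bash helper: pick the named raid bdev and compare fields."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# Same check as `verify_raid_bdev_state Existed_Raid online raid1 0 1` above.
ok = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid1", 0, 1)
print(ok)  # → True
```

After `bdev_malloc_delete BaseBdev1`, the array stays `online` because raid1 has redundancy — exactly the case the log verifies with `num_base_bdevs_operational=1`.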
00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:32.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88131 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88131 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88131 ']' 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
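The `waitforlisten 88131` step above blocks until the freshly started `bdev_svc` app accepts connections on `/var/tmp/spdk.sock`. A rough Python sketch of that polling pattern (the real helper is a bash function with retries and liveness checks; the timeout, interval, and demo socket path here are assumptions for illustration):

```python
import os
import socket
import tempfile
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.1):
    """Poll until something accepts connections on a UNIX domain socket,
    in the spirit of waitforlisten polling /var/tmp/spdk.sock."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except OSError:
            time.sleep(interval)
    return False

# Demo against a socket we create ourselves instead of a real SPDK app.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
ok = wait_for_listen(path)
print(ok)  # → True
server.close()
```

Polling for the socket rather than sleeping a fixed interval is what lets the harness proceed the moment the reactor prints "Reactor started on core 0" and the RPC server is up.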
00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.438 14:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.438 [2024-11-27 14:22:02.800491] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:22:32.438 [2024-11-27 14:22:02.800699] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88131 ] 00:22:32.697 [2024-11-27 14:22:02.989417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.697 [2024-11-27 14:22:03.125053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.963 [2024-11-27 14:22:03.334302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:32.963 [2024-11-27 14:22:03.334351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:33.532 14:22:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 malloc1 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.532 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 [2024-11-27 14:22:03.903785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:33.532 [2024-11-27 14:22:03.904025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.532 [2024-11-27 14:22:03.904111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:33.533 [2024-11-27 14:22:03.904311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.533 [2024-11-27 14:22:03.906911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.533 [2024-11-27 14:22:03.907070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:22:33.533 pt1 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.533 malloc2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.533 14:22:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.533 [2024-11-27 14:22:03.961488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:33.533 [2024-11-27 14:22:03.961710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.533 [2024-11-27 14:22:03.961785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:33.533 [2024-11-27 14:22:03.961926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.533 [2024-11-27 14:22:03.964446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.533 [2024-11-27 14:22:03.964593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:33.533 pt2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.533 [2024-11-27 14:22:03.969568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:33.533 [2024-11-27 14:22:03.972277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:33.533 [2024-11-27 14:22:03.972636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:33.533 [2024-11-27 14:22:03.972767] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:33.533 [2024-11-27 14:22:03.972904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:33.533 [2024-11-27 14:22:03.973077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:33.533 [2024-11-27 14:22:03.973098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:33.533 [2024-11-27 14:22:03.973231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.533 14:22:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.533 14:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.533 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.533 "name": "raid_bdev1", 00:22:33.533 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:33.533 "strip_size_kb": 0, 00:22:33.533 "state": "online", 00:22:33.533 "raid_level": "raid1", 00:22:33.533 "superblock": true, 00:22:33.533 "num_base_bdevs": 2, 00:22:33.533 "num_base_bdevs_discovered": 2, 00:22:33.533 "num_base_bdevs_operational": 2, 00:22:33.533 "base_bdevs_list": [ 00:22:33.533 { 00:22:33.533 "name": "pt1", 00:22:33.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:33.533 "is_configured": true, 00:22:33.533 "data_offset": 256, 00:22:33.533 "data_size": 7936 00:22:33.533 }, 00:22:33.533 { 00:22:33.533 "name": "pt2", 00:22:33.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:33.533 "is_configured": true, 00:22:33.533 "data_offset": 256, 00:22:33.533 "data_size": 7936 00:22:33.533 } 00:22:33.533 ] 00:22:33.533 }' 00:22:33.533 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.533 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.099 [2024-11-27 14:22:04.486099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.099 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.100 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:34.100 "name": "raid_bdev1", 00:22:34.100 "aliases": [ 00:22:34.100 "bf7b6e25-eeea-40a3-9332-370f358e4023" 00:22:34.100 ], 00:22:34.100 "product_name": "Raid Volume", 00:22:34.100 "block_size": 4096, 00:22:34.100 "num_blocks": 7936, 00:22:34.100 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:34.100 "md_size": 32, 00:22:34.100 "md_interleave": false, 00:22:34.100 "dif_type": 0, 00:22:34.100 "assigned_rate_limits": { 00:22:34.100 "rw_ios_per_sec": 0, 00:22:34.100 "rw_mbytes_per_sec": 0, 00:22:34.100 "r_mbytes_per_sec": 0, 00:22:34.100 "w_mbytes_per_sec": 0 00:22:34.100 }, 00:22:34.100 "claimed": false, 00:22:34.100 "zoned": false, 
00:22:34.100 "supported_io_types": { 00:22:34.100 "read": true, 00:22:34.100 "write": true, 00:22:34.100 "unmap": false, 00:22:34.100 "flush": false, 00:22:34.100 "reset": true, 00:22:34.100 "nvme_admin": false, 00:22:34.100 "nvme_io": false, 00:22:34.100 "nvme_io_md": false, 00:22:34.100 "write_zeroes": true, 00:22:34.100 "zcopy": false, 00:22:34.100 "get_zone_info": false, 00:22:34.100 "zone_management": false, 00:22:34.100 "zone_append": false, 00:22:34.100 "compare": false, 00:22:34.100 "compare_and_write": false, 00:22:34.100 "abort": false, 00:22:34.100 "seek_hole": false, 00:22:34.100 "seek_data": false, 00:22:34.100 "copy": false, 00:22:34.100 "nvme_iov_md": false 00:22:34.100 }, 00:22:34.100 "memory_domains": [ 00:22:34.100 { 00:22:34.100 "dma_device_id": "system", 00:22:34.100 "dma_device_type": 1 00:22:34.100 }, 00:22:34.100 { 00:22:34.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.100 "dma_device_type": 2 00:22:34.100 }, 00:22:34.100 { 00:22:34.100 "dma_device_id": "system", 00:22:34.100 "dma_device_type": 1 00:22:34.100 }, 00:22:34.100 { 00:22:34.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.100 "dma_device_type": 2 00:22:34.100 } 00:22:34.100 ], 00:22:34.100 "driver_specific": { 00:22:34.100 "raid": { 00:22:34.100 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:34.100 "strip_size_kb": 0, 00:22:34.100 "state": "online", 00:22:34.100 "raid_level": "raid1", 00:22:34.100 "superblock": true, 00:22:34.100 "num_base_bdevs": 2, 00:22:34.100 "num_base_bdevs_discovered": 2, 00:22:34.100 "num_base_bdevs_operational": 2, 00:22:34.100 "base_bdevs_list": [ 00:22:34.100 { 00:22:34.100 "name": "pt1", 00:22:34.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:34.100 "is_configured": true, 00:22:34.100 "data_offset": 256, 00:22:34.100 "data_size": 7936 00:22:34.100 }, 00:22:34.100 { 00:22:34.100 "name": "pt2", 00:22:34.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.100 "is_configured": true, 00:22:34.100 "data_offset": 256, 
00:22:34.100 "data_size": 7936 00:22:34.100 } 00:22:34.100 ] 00:22:34.100 } 00:22:34.100 } 00:22:34.100 }' 00:22:34.100 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.100 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:34.100 pt2' 00:22:34.100 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.358 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:34.358 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:34.358 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.358 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:34.358 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.358 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:34.359 [2024-11-27 14:22:04.750125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bf7b6e25-eeea-40a3-9332-370f358e4023 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z bf7b6e25-eeea-40a3-9332-370f358e4023 ']' 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:34.359 14:22:04 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.359 [2024-11-27 14:22:04.809753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:34.359 [2024-11-27 14:22:04.809940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:34.359 [2024-11-27 14:22:04.810195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:34.359 [2024-11-27 14:22:04.810411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:34.359 [2024-11-27 14:22:04.810445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.359 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.618 [2024-11-27 14:22:04.949879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:34.618 [2024-11-27 14:22:04.952587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:34.618 [2024-11-27 14:22:04.952810] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:34.618 [2024-11-27 14:22:04.952909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:34.618 [2024-11-27 14:22:04.952938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:34.618 [2024-11-27 14:22:04.952954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:22:34.618 request: 00:22:34.618 { 00:22:34.618 "name": "raid_bdev1", 00:22:34.618 "raid_level": "raid1", 00:22:34.618 "base_bdevs": [ 00:22:34.618 "malloc1", 00:22:34.618 "malloc2" 00:22:34.618 ], 00:22:34.618 "superblock": false, 00:22:34.618 "method": "bdev_raid_create", 00:22:34.618 "req_id": 1 00:22:34.618 } 00:22:34.618 Got JSON-RPC error response 00:22:34.618 response: 00:22:34.618 { 00:22:34.618 "code": -17, 00:22:34.618 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:34.618 } 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:34.618 14:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.618 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:34.618 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:34.618 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:34.618 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.618 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.618 [2024-11-27 14:22:05.013821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:34.618 [2024-11-27 14:22:05.014055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.619 [2024-11-27 14:22:05.014135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:34.619 [2024-11-27 14:22:05.014267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.619 [2024-11-27 14:22:05.017133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.619 [2024-11-27 14:22:05.017228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:34.619 [2024-11-27 14:22:05.017293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:34.619 [2024-11-27 14:22:05.017378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:34.619 pt1 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.619 "name": "raid_bdev1", 00:22:34.619 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:34.619 "strip_size_kb": 0, 00:22:34.619 "state": "configuring", 00:22:34.619 "raid_level": "raid1", 00:22:34.619 "superblock": true, 00:22:34.619 "num_base_bdevs": 2, 00:22:34.619 "num_base_bdevs_discovered": 1, 00:22:34.619 "num_base_bdevs_operational": 2, 00:22:34.619 "base_bdevs_list": [ 00:22:34.619 { 00:22:34.619 "name": "pt1", 00:22:34.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:34.619 "is_configured": true, 00:22:34.619 "data_offset": 256, 00:22:34.619 "data_size": 7936 00:22:34.619 }, 00:22:34.619 { 
00:22:34.619 "name": null, 00:22:34.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.619 "is_configured": false, 00:22:34.619 "data_offset": 256, 00:22:34.619 "data_size": 7936 00:22:34.619 } 00:22:34.619 ] 00:22:34.619 }' 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.619 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.185 [2024-11-27 14:22:05.538037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:35.185 [2024-11-27 14:22:05.538270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.185 [2024-11-27 14:22:05.538344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:35.185 [2024-11-27 14:22:05.538591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.185 [2024-11-27 14:22:05.538970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.185 [2024-11-27 14:22:05.539008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:35.185 [2024-11-27 14:22:05.539078] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:35.185 [2024-11-27 14:22:05.539115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:35.185 [2024-11-27 14:22:05.539254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:35.185 [2024-11-27 14:22:05.539274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:35.185 [2024-11-27 14:22:05.539368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:35.185 [2024-11-27 14:22:05.539548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:35.185 [2024-11-27 14:22:05.539562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:35.185 [2024-11-27 14:22:05.539682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.185 pt2 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:35.185 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:35.186 14:22:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.186 "name": "raid_bdev1", 00:22:35.186 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:35.186 "strip_size_kb": 0, 00:22:35.186 "state": "online", 00:22:35.186 "raid_level": "raid1", 00:22:35.186 "superblock": true, 00:22:35.186 "num_base_bdevs": 2, 00:22:35.186 "num_base_bdevs_discovered": 2, 00:22:35.186 "num_base_bdevs_operational": 2, 00:22:35.186 "base_bdevs_list": [ 00:22:35.186 { 00:22:35.186 "name": "pt1", 00:22:35.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:35.186 "is_configured": true, 00:22:35.186 "data_offset": 256, 00:22:35.186 "data_size": 7936 00:22:35.186 }, 00:22:35.186 { 00:22:35.186 "name": "pt2", 00:22:35.186 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:35.186 "is_configured": true, 00:22:35.186 "data_offset": 256, 00:22:35.186 "data_size": 7936 00:22:35.186 } 00:22:35.186 ] 00:22:35.186 }' 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.186 14:22:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.753 [2024-11-27 14:22:06.062562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.753 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:35.753 "name": "raid_bdev1", 00:22:35.753 
"aliases": [ 00:22:35.753 "bf7b6e25-eeea-40a3-9332-370f358e4023" 00:22:35.753 ], 00:22:35.753 "product_name": "Raid Volume", 00:22:35.753 "block_size": 4096, 00:22:35.753 "num_blocks": 7936, 00:22:35.753 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:35.753 "md_size": 32, 00:22:35.753 "md_interleave": false, 00:22:35.753 "dif_type": 0, 00:22:35.753 "assigned_rate_limits": { 00:22:35.753 "rw_ios_per_sec": 0, 00:22:35.753 "rw_mbytes_per_sec": 0, 00:22:35.753 "r_mbytes_per_sec": 0, 00:22:35.753 "w_mbytes_per_sec": 0 00:22:35.753 }, 00:22:35.753 "claimed": false, 00:22:35.753 "zoned": false, 00:22:35.753 "supported_io_types": { 00:22:35.753 "read": true, 00:22:35.753 "write": true, 00:22:35.753 "unmap": false, 00:22:35.753 "flush": false, 00:22:35.753 "reset": true, 00:22:35.753 "nvme_admin": false, 00:22:35.753 "nvme_io": false, 00:22:35.753 "nvme_io_md": false, 00:22:35.753 "write_zeroes": true, 00:22:35.753 "zcopy": false, 00:22:35.753 "get_zone_info": false, 00:22:35.753 "zone_management": false, 00:22:35.753 "zone_append": false, 00:22:35.753 "compare": false, 00:22:35.753 "compare_and_write": false, 00:22:35.753 "abort": false, 00:22:35.753 "seek_hole": false, 00:22:35.753 "seek_data": false, 00:22:35.753 "copy": false, 00:22:35.753 "nvme_iov_md": false 00:22:35.753 }, 00:22:35.753 "memory_domains": [ 00:22:35.753 { 00:22:35.753 "dma_device_id": "system", 00:22:35.753 "dma_device_type": 1 00:22:35.753 }, 00:22:35.753 { 00:22:35.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.753 "dma_device_type": 2 00:22:35.753 }, 00:22:35.753 { 00:22:35.753 "dma_device_id": "system", 00:22:35.753 "dma_device_type": 1 00:22:35.753 }, 00:22:35.753 { 00:22:35.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.753 "dma_device_type": 2 00:22:35.753 } 00:22:35.753 ], 00:22:35.753 "driver_specific": { 00:22:35.753 "raid": { 00:22:35.753 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:35.753 "strip_size_kb": 0, 00:22:35.753 "state": "online", 00:22:35.753 
"raid_level": "raid1", 00:22:35.753 "superblock": true, 00:22:35.753 "num_base_bdevs": 2, 00:22:35.753 "num_base_bdevs_discovered": 2, 00:22:35.753 "num_base_bdevs_operational": 2, 00:22:35.754 "base_bdevs_list": [ 00:22:35.754 { 00:22:35.754 "name": "pt1", 00:22:35.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:35.754 "is_configured": true, 00:22:35.754 "data_offset": 256, 00:22:35.754 "data_size": 7936 00:22:35.754 }, 00:22:35.754 { 00:22:35.754 "name": "pt2", 00:22:35.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:35.754 "is_configured": true, 00:22:35.754 "data_offset": 256, 00:22:35.754 "data_size": 7936 00:22:35.754 } 00:22:35.754 ] 00:22:35.754 } 00:22:35.754 } 00:22:35.754 }' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:35.754 pt2' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.754 14:22:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.754 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.012 [2024-11-27 14:22:06.318615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' bf7b6e25-eeea-40a3-9332-370f358e4023 '!=' bf7b6e25-eeea-40a3-9332-370f358e4023 ']' 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.012 [2024-11-27 14:22:06.370309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:36.012 
14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.012 "name": "raid_bdev1", 00:22:36.012 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:36.012 "strip_size_kb": 0, 00:22:36.012 "state": "online", 00:22:36.012 "raid_level": "raid1", 00:22:36.012 "superblock": true, 00:22:36.012 "num_base_bdevs": 2, 00:22:36.012 "num_base_bdevs_discovered": 1, 00:22:36.012 "num_base_bdevs_operational": 1, 00:22:36.012 "base_bdevs_list": [ 00:22:36.012 { 00:22:36.012 "name": null, 00:22:36.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.012 "is_configured": false, 00:22:36.012 "data_offset": 0, 00:22:36.012 "data_size": 7936 00:22:36.012 }, 00:22:36.012 { 00:22:36.012 "name": "pt2", 00:22:36.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.012 "is_configured": true, 00:22:36.012 "data_offset": 256, 00:22:36.012 "data_size": 7936 00:22:36.012 } 
00:22:36.012 ] 00:22:36.012 }' 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.012 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.582 [2024-11-27 14:22:06.886533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.582 [2024-11-27 14:22:06.886765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.582 [2024-11-27 14:22:06.886918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.582 [2024-11-27 14:22:06.887000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.582 [2024-11-27 14:22:06.887021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.582 14:22:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.582 [2024-11-27 14:22:06.962577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:36.582 [2024-11-27 
14:22:06.962867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.582 [2024-11-27 14:22:06.963011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:36.582 [2024-11-27 14:22:06.963138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.582 [2024-11-27 14:22:06.965979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.582 [2024-11-27 14:22:06.966139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:36.582 [2024-11-27 14:22:06.966328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:36.582 [2024-11-27 14:22:06.966509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:36.582 [2024-11-27 14:22:06.966740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:36.582 [2024-11-27 14:22:06.966914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:36.582 [2024-11-27 14:22:06.967107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:36.582 [2024-11-27 14:22:06.967381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:36.582 pt2 00:22:36.582 [2024-11-27 14:22:06.967504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:36.582 [2024-11-27 14:22:06.967701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.582 14:22:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.582 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.582 "name": "raid_bdev1", 00:22:36.582 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:36.582 "strip_size_kb": 0, 00:22:36.582 "state": "online", 00:22:36.582 "raid_level": "raid1", 00:22:36.582 "superblock": true, 00:22:36.582 "num_base_bdevs": 2, 00:22:36.582 
"num_base_bdevs_discovered": 1, 00:22:36.582 "num_base_bdevs_operational": 1, 00:22:36.582 "base_bdevs_list": [ 00:22:36.582 { 00:22:36.582 "name": null, 00:22:36.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.582 "is_configured": false, 00:22:36.582 "data_offset": 256, 00:22:36.582 "data_size": 7936 00:22:36.582 }, 00:22:36.582 { 00:22:36.582 "name": "pt2", 00:22:36.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.582 "is_configured": true, 00:22:36.582 "data_offset": 256, 00:22:36.582 "data_size": 7936 00:22:36.582 } 00:22:36.582 ] 00:22:36.582 }' 00:22:36.582 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.582 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 [2024-11-27 14:22:07.494998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.151 [2024-11-27 14:22:07.495168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.151 [2024-11-27 14:22:07.495278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.151 [2024-11-27 14:22:07.495367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.151 [2024-11-27 14:22:07.495382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.151 14:22:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 [2024-11-27 14:22:07.563055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:37.151 [2024-11-27 14:22:07.563237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.151 [2024-11-27 14:22:07.563327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:37.151 [2024-11-27 14:22:07.563447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.151 [2024-11-27 14:22:07.566150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.151 [2024-11-27 14:22:07.566368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:22:37.151 [2024-11-27 14:22:07.566554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:37.151 [2024-11-27 14:22:07.566715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:37.151 pt1 00:22:37.151 [2024-11-27 14:22:07.567036] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:37.151 [2024-11-27 14:22:07.567057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.151 [2024-11-27 14:22:07.567080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:37.151 [2024-11-27 14:22:07.567165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.151 [2024-11-27 14:22:07.567317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:37.151 [2024-11-27 14:22:07.567343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:37.151 [2024-11-27 14:22:07.567422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:37.151 [2024-11-27 14:22:07.567561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:37.151 [2024-11-27 14:22:07.567579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:37.151 [2024-11-27 14:22:07.567714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.151 "name": "raid_bdev1", 00:22:37.151 "uuid": "bf7b6e25-eeea-40a3-9332-370f358e4023", 00:22:37.151 "strip_size_kb": 0, 00:22:37.151 "state": "online", 00:22:37.151 "raid_level": "raid1", 
00:22:37.151 "superblock": true, 00:22:37.151 "num_base_bdevs": 2, 00:22:37.151 "num_base_bdevs_discovered": 1, 00:22:37.151 "num_base_bdevs_operational": 1, 00:22:37.151 "base_bdevs_list": [ 00:22:37.151 { 00:22:37.151 "name": null, 00:22:37.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.151 "is_configured": false, 00:22:37.151 "data_offset": 256, 00:22:37.151 "data_size": 7936 00:22:37.151 }, 00:22:37.151 { 00:22:37.151 "name": "pt2", 00:22:37.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:37.151 "is_configured": true, 00:22:37.151 "data_offset": 256, 00:22:37.151 "data_size": 7936 00:22:37.151 } 00:22:37.151 ] 00:22:37.151 }' 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.151 14:22:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.718 14:22:08 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:37.718 [2024-11-27 14:22:08.147731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' bf7b6e25-eeea-40a3-9332-370f358e4023 '!=' bf7b6e25-eeea-40a3-9332-370f358e4023 ']' 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88131 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88131 ']' 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88131 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88131 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.718 killing process with pid 88131 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88131' 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88131 00:22:37.718 [2024-11-27 14:22:08.217368] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.718 14:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # 
wait 88131 00:22:37.718 [2024-11-27 14:22:08.217490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.718 [2024-11-27 14:22:08.217552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.718 [2024-11-27 14:22:08.217577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:37.976 [2024-11-27 14:22:08.421916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.354 14:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:22:39.354 00:22:39.354 real 0m6.817s 00:22:39.354 user 0m10.757s 00:22:39.354 sys 0m1.019s 00:22:39.354 ************************************ 00:22:39.354 END TEST raid_superblock_test_md_separate 00:22:39.354 ************************************ 00:22:39.355 14:22:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.355 14:22:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.355 14:22:09 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:22:39.355 14:22:09 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:39.355 14:22:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:39.355 14:22:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.355 14:22:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.355 ************************************ 00:22:39.355 START TEST raid_rebuild_test_sb_md_separate 00:22:39.355 ************************************ 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:39.355 
14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88465 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88465 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88465 ']' 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.355 14:22:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.355 [2024-11-27 14:22:09.652864] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:22:39.355 [2024-11-27 14:22:09.653246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:22:39.355 Zero copy mechanism will not be used. 00:22:39.355 -allocations --file-prefix=spdk_pid88465 ] 00:22:39.355 [2024-11-27 14:22:09.828980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.613 [2024-11-27 14:22:09.964818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.872 [2024-11-27 14:22:10.173862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.872 [2024-11-27 14:22:10.174106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.130 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.130 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:40.130 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:40.130 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:40.130 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.130 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 BaseBdev1_malloc 
00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 [2024-11-27 14:22:10.676830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:40.465 [2024-11-27 14:22:10.677061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.465 [2024-11-27 14:22:10.677139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:40.465 [2024-11-27 14:22:10.677264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.465 [2024-11-27 14:22:10.679937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.465 [2024-11-27 14:22:10.679985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:40.465 BaseBdev1 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 BaseBdev2_malloc 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 [2024-11-27 14:22:10.730091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:40.465 [2024-11-27 14:22:10.730323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.465 [2024-11-27 14:22:10.730397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:40.465 [2024-11-27 14:22:10.730519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.465 [2024-11-27 14:22:10.733290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.465 [2024-11-27 14:22:10.733339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:40.465 BaseBdev2 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 spare_malloc 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 spare_delay 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 [2024-11-27 14:22:10.798871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.465 [2024-11-27 14:22:10.799060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.465 [2024-11-27 14:22:10.799100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:40.465 [2024-11-27 14:22:10.799121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.465 [2024-11-27 14:22:10.801645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.465 [2024-11-27 14:22:10.801695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.465 spare 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.465 [2024-11-27 14:22:10.806888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.465 [2024-11-27 14:22:10.809391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.465 [2024-11-27 14:22:10.809630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:40.465 [2024-11-27 14:22:10.809654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:40.465 [2024-11-27 14:22:10.809748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:40.465 [2024-11-27 14:22:10.809951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:40.465 [2024-11-27 14:22:10.809969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:40.465 [2024-11-27 14:22:10.810096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.465 14:22:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.465 "name": "raid_bdev1", 00:22:40.465 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:40.465 "strip_size_kb": 0, 00:22:40.465 "state": "online", 00:22:40.465 "raid_level": "raid1", 00:22:40.465 "superblock": true, 00:22:40.465 "num_base_bdevs": 2, 00:22:40.465 "num_base_bdevs_discovered": 2, 00:22:40.465 "num_base_bdevs_operational": 2, 00:22:40.465 "base_bdevs_list": [ 00:22:40.465 { 00:22:40.465 "name": "BaseBdev1", 00:22:40.465 "uuid": "9032b9f9-c27f-5133-a382-38a9893ad5b9", 00:22:40.465 "is_configured": true, 00:22:40.465 "data_offset": 256, 00:22:40.465 "data_size": 7936 00:22:40.465 }, 00:22:40.465 { 00:22:40.465 "name": "BaseBdev2", 00:22:40.465 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:40.465 "is_configured": true, 00:22:40.465 "data_offset": 256, 00:22:40.465 "data_size": 7936 
00:22:40.465 } 00:22:40.465 ] 00:22:40.465 }' 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.465 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 [2024-11-27 14:22:11.331476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:41.032 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.033 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:41.033 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.033 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:41.033 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.033 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.033 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:41.292 [2024-11-27 14:22:11.727327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:41.292 /dev/nbd0 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:41.292 1+0 records in 00:22:41.292 1+0 records out 00:22:41.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239149 s, 17.1 MB/s 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.292 14:22:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:41.292 14:22:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:42.228 7936+0 records in 00:22:42.228 7936+0 records out 00:22:42.228 32505856 bytes (33 MB, 31 MiB) copied, 0.933852 s, 34.8 MB/s 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.228 14:22:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:42.794 [2024-11-27 14:22:13.035530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.794 14:22:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:42.794 [2024-11-27 14:22:13.051737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.794 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.795 "name": "raid_bdev1", 00:22:42.795 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:42.795 "strip_size_kb": 0, 00:22:42.795 "state": "online", 00:22:42.795 "raid_level": "raid1", 00:22:42.795 "superblock": true, 00:22:42.795 "num_base_bdevs": 2, 00:22:42.795 "num_base_bdevs_discovered": 1, 00:22:42.795 "num_base_bdevs_operational": 1, 00:22:42.795 "base_bdevs_list": [ 00:22:42.795 { 00:22:42.795 "name": null, 00:22:42.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.795 "is_configured": false, 00:22:42.795 "data_offset": 0, 00:22:42.795 "data_size": 7936 00:22:42.795 }, 00:22:42.795 { 00:22:42.795 "name": "BaseBdev2", 00:22:42.795 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:42.795 "is_configured": true, 00:22:42.795 "data_offset": 256, 00:22:42.795 "data_size": 7936 00:22:42.795 } 00:22:42.795 ] 00:22:42.795 }' 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.795 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:43.054 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:43.054 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.054 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:43.054 [2024-11-27 14:22:13.564009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:43.312 [2024-11-27 14:22:13.578621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:43.312 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.312 14:22:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:43.312 [2024-11-27 14:22:13.581406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.246 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:44.247 "name": "raid_bdev1", 00:22:44.247 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:44.247 "strip_size_kb": 0, 00:22:44.247 "state": "online", 00:22:44.247 "raid_level": "raid1", 00:22:44.247 "superblock": true, 00:22:44.247 "num_base_bdevs": 2, 00:22:44.247 "num_base_bdevs_discovered": 2, 00:22:44.247 "num_base_bdevs_operational": 2, 00:22:44.247 "process": { 00:22:44.247 "type": "rebuild", 00:22:44.247 "target": "spare", 00:22:44.247 "progress": { 00:22:44.247 "blocks": 2560, 00:22:44.247 "percent": 32 00:22:44.247 } 00:22:44.247 }, 00:22:44.247 "base_bdevs_list": [ 00:22:44.247 { 00:22:44.247 "name": "spare", 00:22:44.247 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:44.247 "is_configured": true, 00:22:44.247 "data_offset": 256, 00:22:44.247 "data_size": 7936 00:22:44.247 }, 00:22:44.247 { 00:22:44.247 "name": "BaseBdev2", 00:22:44.247 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:44.247 "is_configured": true, 00:22:44.247 "data_offset": 256, 00:22:44.247 "data_size": 7936 00:22:44.247 } 00:22:44.247 ] 00:22:44.247 }' 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.247 14:22:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.247 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:44.247 [2024-11-27 14:22:14.751300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:44.505 [2024-11-27 14:22:14.790752] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:44.505 [2024-11-27 14:22:14.790869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.505 [2024-11-27 14:22:14.790895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:44.505 [2024-11-27 14:22:14.790927] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.505 14:22:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.505 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.505 "name": "raid_bdev1", 00:22:44.505 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:44.505 "strip_size_kb": 0, 00:22:44.505 "state": "online", 00:22:44.505 "raid_level": "raid1", 00:22:44.505 "superblock": true, 00:22:44.505 "num_base_bdevs": 2, 00:22:44.506 "num_base_bdevs_discovered": 1, 00:22:44.506 "num_base_bdevs_operational": 1, 00:22:44.506 "base_bdevs_list": [ 00:22:44.506 { 00:22:44.506 "name": null, 00:22:44.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.506 "is_configured": false, 00:22:44.506 "data_offset": 0, 00:22:44.506 "data_size": 7936 00:22:44.506 }, 00:22:44.506 { 00:22:44.506 "name": "BaseBdev2", 00:22:44.506 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:44.506 "is_configured": true, 00:22:44.506 "data_offset": 256, 00:22:44.506 "data_size": 7936 00:22:44.506 } 00:22:44.506 ] 00:22:44.506 }' 00:22:44.506 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.506 14:22:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.072 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.073 "name": "raid_bdev1", 00:22:45.073 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:45.073 "strip_size_kb": 0, 00:22:45.073 "state": "online", 00:22:45.073 "raid_level": "raid1", 00:22:45.073 "superblock": true, 00:22:45.073 "num_base_bdevs": 2, 00:22:45.073 "num_base_bdevs_discovered": 1, 00:22:45.073 "num_base_bdevs_operational": 1, 00:22:45.073 "base_bdevs_list": [ 00:22:45.073 { 00:22:45.073 "name": null, 00:22:45.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.073 
"is_configured": false, 00:22:45.073 "data_offset": 0, 00:22:45.073 "data_size": 7936 00:22:45.073 }, 00:22:45.073 { 00:22:45.073 "name": "BaseBdev2", 00:22:45.073 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:45.073 "is_configured": true, 00:22:45.073 "data_offset": 256, 00:22:45.073 "data_size": 7936 00:22:45.073 } 00:22:45.073 ] 00:22:45.073 }' 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:45.073 [2024-11-27 14:22:15.481578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.073 [2024-11-27 14:22:15.495204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.073 14:22:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:45.073 [2024-11-27 14:22:15.497751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.008 14:22:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.008 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:46.268 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.268 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:46.268 "name": "raid_bdev1", 00:22:46.268 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:46.268 "strip_size_kb": 0, 00:22:46.268 "state": "online", 00:22:46.268 "raid_level": "raid1", 00:22:46.268 "superblock": true, 00:22:46.268 "num_base_bdevs": 2, 00:22:46.268 "num_base_bdevs_discovered": 2, 00:22:46.268 "num_base_bdevs_operational": 2, 00:22:46.268 "process": { 00:22:46.268 "type": "rebuild", 00:22:46.268 "target": "spare", 00:22:46.268 "progress": { 00:22:46.268 "blocks": 2560, 00:22:46.268 "percent": 32 00:22:46.268 } 00:22:46.268 }, 00:22:46.268 "base_bdevs_list": [ 00:22:46.268 { 00:22:46.268 "name": "spare", 00:22:46.268 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:46.268 "is_configured": true, 00:22:46.268 "data_offset": 256, 00:22:46.268 "data_size": 7936 00:22:46.268 }, 
00:22:46.268 { 00:22:46.268 "name": "BaseBdev2", 00:22:46.268 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:46.268 "is_configured": true, 00:22:46.268 "data_offset": 256, 00:22:46.268 "data_size": 7936 00:22:46.268 } 00:22:46.268 ] 00:22:46.268 }' 00:22:46.268 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:46.268 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.268 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:46.268 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:46.269 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=778 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:46.269 14:22:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:46.269 "name": "raid_bdev1", 00:22:46.269 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:46.269 "strip_size_kb": 0, 00:22:46.269 "state": "online", 00:22:46.269 "raid_level": "raid1", 00:22:46.269 "superblock": true, 00:22:46.269 "num_base_bdevs": 2, 00:22:46.269 "num_base_bdevs_discovered": 2, 00:22:46.269 "num_base_bdevs_operational": 2, 00:22:46.269 "process": { 00:22:46.269 "type": "rebuild", 00:22:46.269 "target": "spare", 00:22:46.269 "progress": { 00:22:46.269 "blocks": 2816, 00:22:46.269 "percent": 35 00:22:46.269 } 00:22:46.269 }, 00:22:46.269 "base_bdevs_list": [ 00:22:46.269 { 00:22:46.269 "name": "spare", 00:22:46.269 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:46.269 "is_configured": true, 00:22:46.269 "data_offset": 256, 00:22:46.269 "data_size": 7936 00:22:46.269 }, 00:22:46.269 { 00:22:46.269 "name": "BaseBdev2", 00:22:46.269 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:46.269 
"is_configured": true, 00:22:46.269 "data_offset": 256, 00:22:46.269 "data_size": 7936 00:22:46.269 } 00:22:46.269 ] 00:22:46.269 }' 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.269 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:46.527 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.527 14:22:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.463 14:22:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:47.463 "name": "raid_bdev1", 00:22:47.463 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:47.463 "strip_size_kb": 0, 00:22:47.463 "state": "online", 00:22:47.463 "raid_level": "raid1", 00:22:47.463 "superblock": true, 00:22:47.463 "num_base_bdevs": 2, 00:22:47.463 "num_base_bdevs_discovered": 2, 00:22:47.463 "num_base_bdevs_operational": 2, 00:22:47.463 "process": { 00:22:47.463 "type": "rebuild", 00:22:47.463 "target": "spare", 00:22:47.463 "progress": { 00:22:47.463 "blocks": 5888, 00:22:47.463 "percent": 74 00:22:47.463 } 00:22:47.463 }, 00:22:47.463 "base_bdevs_list": [ 00:22:47.463 { 00:22:47.463 "name": "spare", 00:22:47.463 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:47.463 "is_configured": true, 00:22:47.463 "data_offset": 256, 00:22:47.463 "data_size": 7936 00:22:47.463 }, 00:22:47.463 { 00:22:47.463 "name": "BaseBdev2", 00:22:47.463 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:47.463 "is_configured": true, 00:22:47.463 "data_offset": 256, 00:22:47.463 "data_size": 7936 00:22:47.463 } 00:22:47.463 ] 00:22:47.463 }' 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.463 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:47.722 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.722 14:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:48.290 [2024-11-27 14:22:18.620403] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:22:48.290 [2024-11-27 14:22:18.620521] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:48.290 [2024-11-27 14:22:18.620682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:48.549 14:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.549 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.549 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.549 "name": "raid_bdev1", 00:22:48.549 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:48.549 "strip_size_kb": 0, 00:22:48.549 "state": "online", 00:22:48.549 "raid_level": "raid1", 00:22:48.549 "superblock": true, 00:22:48.549 
"num_base_bdevs": 2, 00:22:48.549 "num_base_bdevs_discovered": 2, 00:22:48.549 "num_base_bdevs_operational": 2, 00:22:48.549 "base_bdevs_list": [ 00:22:48.549 { 00:22:48.549 "name": "spare", 00:22:48.549 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:48.549 "is_configured": true, 00:22:48.549 "data_offset": 256, 00:22:48.549 "data_size": 7936 00:22:48.549 }, 00:22:48.549 { 00:22:48.549 "name": "BaseBdev2", 00:22:48.549 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:48.549 "is_configured": true, 00:22:48.549 "data_offset": 256, 00:22:48.549 "data_size": 7936 00:22:48.549 } 00:22:48.549 ] 00:22:48.549 }' 00:22:48.549 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.809 14:22:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.809 "name": "raid_bdev1", 00:22:48.809 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:48.809 "strip_size_kb": 0, 00:22:48.809 "state": "online", 00:22:48.809 "raid_level": "raid1", 00:22:48.809 "superblock": true, 00:22:48.809 "num_base_bdevs": 2, 00:22:48.809 "num_base_bdevs_discovered": 2, 00:22:48.809 "num_base_bdevs_operational": 2, 00:22:48.809 "base_bdevs_list": [ 00:22:48.809 { 00:22:48.809 "name": "spare", 00:22:48.809 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:48.809 "is_configured": true, 00:22:48.809 "data_offset": 256, 00:22:48.809 "data_size": 7936 00:22:48.809 }, 00:22:48.809 { 00:22:48.809 "name": "BaseBdev2", 00:22:48.809 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:48.809 "is_configured": true, 00:22:48.809 "data_offset": 256, 00:22:48.809 "data_size": 7936 00:22:48.809 } 00:22:48.809 ] 00:22:48.809 }' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:48.809 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.068 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.068 "name": "raid_bdev1", 00:22:49.068 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:49.068 
"strip_size_kb": 0, 00:22:49.068 "state": "online", 00:22:49.068 "raid_level": "raid1", 00:22:49.068 "superblock": true, 00:22:49.068 "num_base_bdevs": 2, 00:22:49.068 "num_base_bdevs_discovered": 2, 00:22:49.068 "num_base_bdevs_operational": 2, 00:22:49.068 "base_bdevs_list": [ 00:22:49.068 { 00:22:49.068 "name": "spare", 00:22:49.068 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:49.068 "is_configured": true, 00:22:49.068 "data_offset": 256, 00:22:49.068 "data_size": 7936 00:22:49.068 }, 00:22:49.068 { 00:22:49.068 "name": "BaseBdev2", 00:22:49.068 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:49.068 "is_configured": true, 00:22:49.068 "data_offset": 256, 00:22:49.068 "data_size": 7936 00:22:49.068 } 00:22:49.068 ] 00:22:49.068 }' 00:22:49.068 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.068 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:49.330 [2024-11-27 14:22:19.778606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:49.330 [2024-11-27 14:22:19.778676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.330 [2024-11-27 14:22:19.778775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.330 [2024-11-27 14:22:19.778942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.330 [2024-11-27 14:22:19.778976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:49.330 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:49.331 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:49.331 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:49.331 14:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:49.898 /dev/nbd0 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:49.898 1+0 records in 00:22:49.898 1+0 records out 00:22:49.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254074 s, 16.1 MB/s 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:49.898 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:49.898 /dev/nbd1 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.157 1+0 records in 00:22:50.157 1+0 records out 00:22:50.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435024 s, 9.4 MB/s 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.157 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.416 14:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.674 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:50.933 [2024-11-27 14:22:21.195211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:50.933 [2024-11-27 14:22:21.195285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.933 [2024-11-27 14:22:21.195319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:50.933 [2024-11-27 14:22:21.195334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:22:50.933 [2024-11-27 14:22:21.197919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.933 [2024-11-27 14:22:21.197956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:50.933 [2024-11-27 14:22:21.198031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:50.933 [2024-11-27 14:22:21.198103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.933 [2024-11-27 14:22:21.198315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.933 spare 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:50.933 [2024-11-27 14:22:21.298420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:50.933 [2024-11-27 14:22:21.298460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:50.933 [2024-11-27 14:22:21.298587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:50.933 [2024-11-27 14:22:21.298842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:50.933 [2024-11-27 14:22:21.298879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:50.933 [2024-11-27 14:22:21.299022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.933 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.933 "name": "raid_bdev1", 00:22:50.933 "uuid": 
"e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:50.933 "strip_size_kb": 0, 00:22:50.933 "state": "online", 00:22:50.933 "raid_level": "raid1", 00:22:50.933 "superblock": true, 00:22:50.933 "num_base_bdevs": 2, 00:22:50.933 "num_base_bdevs_discovered": 2, 00:22:50.933 "num_base_bdevs_operational": 2, 00:22:50.933 "base_bdevs_list": [ 00:22:50.933 { 00:22:50.933 "name": "spare", 00:22:50.933 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:50.933 "is_configured": true, 00:22:50.933 "data_offset": 256, 00:22:50.933 "data_size": 7936 00:22:50.933 }, 00:22:50.933 { 00:22:50.933 "name": "BaseBdev2", 00:22:50.934 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:50.934 "is_configured": true, 00:22:50.934 "data_offset": 256, 00:22:50.934 "data_size": 7936 00:22:50.934 } 00:22:50.934 ] 00:22:50.934 }' 00:22:50.934 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.934 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.501 "name": "raid_bdev1", 00:22:51.501 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:51.501 "strip_size_kb": 0, 00:22:51.501 "state": "online", 00:22:51.501 "raid_level": "raid1", 00:22:51.501 "superblock": true, 00:22:51.501 "num_base_bdevs": 2, 00:22:51.501 "num_base_bdevs_discovered": 2, 00:22:51.501 "num_base_bdevs_operational": 2, 00:22:51.501 "base_bdevs_list": [ 00:22:51.501 { 00:22:51.501 "name": "spare", 00:22:51.501 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:51.501 "is_configured": true, 00:22:51.501 "data_offset": 256, 00:22:51.501 "data_size": 7936 00:22:51.501 }, 00:22:51.501 { 00:22:51.501 "name": "BaseBdev2", 00:22:51.501 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:51.501 "is_configured": true, 00:22:51.501 "data_offset": 256, 00:22:51.501 "data_size": 7936 00:22:51.501 } 00:22:51.501 ] 00:22:51.501 }' 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.501 
14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.501 14:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.501 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.501 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:51.501 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.501 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.501 [2024-11-27 14:22:22.011606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.760 "name": "raid_bdev1", 00:22:51.760 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:51.760 "strip_size_kb": 0, 00:22:51.760 "state": "online", 00:22:51.760 "raid_level": "raid1", 00:22:51.760 "superblock": true, 00:22:51.760 "num_base_bdevs": 2, 00:22:51.760 "num_base_bdevs_discovered": 1, 00:22:51.760 "num_base_bdevs_operational": 1, 00:22:51.760 "base_bdevs_list": [ 00:22:51.760 { 00:22:51.760 "name": null, 00:22:51.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.760 "is_configured": false, 00:22:51.760 "data_offset": 0, 00:22:51.760 "data_size": 7936 00:22:51.760 }, 00:22:51.760 { 00:22:51.760 "name": "BaseBdev2", 00:22:51.760 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:51.760 "is_configured": true, 00:22:51.760 "data_offset": 256, 00:22:51.760 "data_size": 7936 00:22:51.760 } 00:22:51.760 ] 00:22:51.760 }' 00:22:51.760 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.760 14:22:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.331 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:52.331 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.331 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.331 [2024-11-27 14:22:22.539864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.331 [2024-11-27 14:22:22.540112] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:52.331 [2024-11-27 14:22:22.540151] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:52.332 [2024-11-27 14:22:22.540194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.332 [2024-11-27 14:22:22.553530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:52.332 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.332 14:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:52.332 [2024-11-27 14:22:22.556289] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.275 "name": "raid_bdev1", 00:22:53.275 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:53.275 "strip_size_kb": 0, 00:22:53.275 "state": "online", 00:22:53.275 "raid_level": "raid1", 00:22:53.275 "superblock": true, 00:22:53.275 "num_base_bdevs": 2, 00:22:53.275 "num_base_bdevs_discovered": 2, 00:22:53.275 "num_base_bdevs_operational": 2, 00:22:53.275 "process": { 00:22:53.275 "type": "rebuild", 00:22:53.275 "target": "spare", 00:22:53.275 "progress": { 00:22:53.275 "blocks": 2560, 00:22:53.275 "percent": 32 00:22:53.275 } 00:22:53.275 }, 00:22:53.275 "base_bdevs_list": [ 00:22:53.275 { 00:22:53.275 "name": "spare", 00:22:53.275 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:53.275 "is_configured": true, 00:22:53.275 "data_offset": 256, 00:22:53.275 "data_size": 7936 00:22:53.275 }, 00:22:53.275 { 00:22:53.275 "name": "BaseBdev2", 00:22:53.275 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:53.275 "is_configured": true, 00:22:53.275 "data_offset": 256, 00:22:53.275 "data_size": 7936 00:22:53.275 } 00:22:53.275 ] 00:22:53.275 }' 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.275 [2024-11-27 14:22:23.725758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.275 [2024-11-27 14:22:23.765486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:53.275 [2024-11-27 14:22:23.765571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.275 [2024-11-27 14:22:23.765593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.275 [2024-11-27 14:22:23.765633] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:53.275 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.533 "name": "raid_bdev1", 00:22:53.533 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:53.533 "strip_size_kb": 0, 00:22:53.533 "state": "online", 00:22:53.533 "raid_level": "raid1", 00:22:53.533 "superblock": true, 00:22:53.533 "num_base_bdevs": 2, 00:22:53.533 "num_base_bdevs_discovered": 1, 00:22:53.533 "num_base_bdevs_operational": 1, 00:22:53.533 "base_bdevs_list": [ 00:22:53.533 { 00:22:53.533 "name": null, 00:22:53.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.533 
"is_configured": false, 00:22:53.533 "data_offset": 0, 00:22:53.533 "data_size": 7936 00:22:53.533 }, 00:22:53.533 { 00:22:53.533 "name": "BaseBdev2", 00:22:53.533 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:53.533 "is_configured": true, 00:22:53.533 "data_offset": 256, 00:22:53.533 "data_size": 7936 00:22:53.533 } 00:22:53.533 ] 00:22:53.533 }' 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.533 14:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.792 14:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:53.792 14:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.792 14:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.792 [2024-11-27 14:22:24.288843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.792 [2024-11-27 14:22:24.288946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.792 [2024-11-27 14:22:24.288981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:53.792 [2024-11-27 14:22:24.288999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.792 [2024-11-27 14:22:24.289357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.792 [2024-11-27 14:22:24.289396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.792 [2024-11-27 14:22:24.289471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:53.792 [2024-11-27 14:22:24.289493] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:22:53.792 [2024-11-27 14:22:24.289507] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:53.792 [2024-11-27 14:22:24.289537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.792 [2024-11-27 14:22:24.303065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:53.792 spare 00:22:54.051 14:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.051 14:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:54.051 [2024-11-27 14:22:24.305538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:54.986 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.987 "name": "raid_bdev1", 00:22:54.987 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:54.987 "strip_size_kb": 0, 00:22:54.987 "state": "online", 00:22:54.987 "raid_level": "raid1", 00:22:54.987 "superblock": true, 00:22:54.987 "num_base_bdevs": 2, 00:22:54.987 "num_base_bdevs_discovered": 2, 00:22:54.987 "num_base_bdevs_operational": 2, 00:22:54.987 "process": { 00:22:54.987 "type": "rebuild", 00:22:54.987 "target": "spare", 00:22:54.987 "progress": { 00:22:54.987 "blocks": 2560, 00:22:54.987 "percent": 32 00:22:54.987 } 00:22:54.987 }, 00:22:54.987 "base_bdevs_list": [ 00:22:54.987 { 00:22:54.987 "name": "spare", 00:22:54.987 "uuid": "5f2e7566-0f0c-541f-8c27-dbfb87f6a0f7", 00:22:54.987 "is_configured": true, 00:22:54.987 "data_offset": 256, 00:22:54.987 "data_size": 7936 00:22:54.987 }, 00:22:54.987 { 00:22:54.987 "name": "BaseBdev2", 00:22:54.987 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:54.987 "is_configured": true, 00:22:54.987 "data_offset": 256, 00:22:54.987 "data_size": 7936 00:22:54.987 } 00:22:54.987 ] 00:22:54.987 }' 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:54.987 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.987 14:22:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.987 [2024-11-27 14:22:25.475691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.246 [2024-11-27 14:22:25.514563] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:55.246 [2024-11-27 14:22:25.514642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.246 [2024-11-27 14:22:25.514687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.246 [2024-11-27 14:22:25.514709] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.246 14:22:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.246 "name": "raid_bdev1", 00:22:55.246 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:55.246 "strip_size_kb": 0, 00:22:55.246 "state": "online", 00:22:55.246 "raid_level": "raid1", 00:22:55.246 "superblock": true, 00:22:55.246 "num_base_bdevs": 2, 00:22:55.246 "num_base_bdevs_discovered": 1, 00:22:55.246 "num_base_bdevs_operational": 1, 00:22:55.246 "base_bdevs_list": [ 00:22:55.246 { 00:22:55.246 "name": null, 00:22:55.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.246 "is_configured": false, 00:22:55.246 "data_offset": 0, 00:22:55.246 "data_size": 7936 00:22:55.246 }, 00:22:55.246 { 00:22:55.246 "name": "BaseBdev2", 00:22:55.246 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:55.246 "is_configured": true, 00:22:55.246 "data_offset": 256, 00:22:55.246 "data_size": 7936 00:22:55.246 } 00:22:55.246 ] 00:22:55.246 }' 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.246 14:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.814 "name": "raid_bdev1", 00:22:55.814 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:55.814 "strip_size_kb": 0, 00:22:55.814 "state": "online", 00:22:55.814 "raid_level": "raid1", 00:22:55.814 "superblock": true, 00:22:55.814 "num_base_bdevs": 2, 00:22:55.814 "num_base_bdevs_discovered": 1, 00:22:55.814 "num_base_bdevs_operational": 1, 00:22:55.814 "base_bdevs_list": [ 00:22:55.814 { 00:22:55.814 "name": null, 00:22:55.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.814 "is_configured": false, 00:22:55.814 "data_offset": 0, 00:22:55.814 "data_size": 7936 00:22:55.814 }, 00:22:55.814 { 00:22:55.814 "name": "BaseBdev2", 00:22:55.814 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:55.814 "is_configured": true, 
00:22:55.814 "data_offset": 256, 00:22:55.814 "data_size": 7936 00:22:55.814 } 00:22:55.814 ] 00:22:55.814 }' 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 [2024-11-27 14:22:26.227679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.814 [2024-11-27 14:22:26.227756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.814 [2024-11-27 14:22:26.227788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:55.814 [2024-11-27 14:22:26.227802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.814 [2024-11-27 14:22:26.228169] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.814 [2024-11-27 14:22:26.228218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.814 [2024-11-27 14:22:26.228286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:55.814 [2024-11-27 14:22:26.228308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:55.814 [2024-11-27 14:22:26.228321] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:55.814 [2024-11-27 14:22:26.228334] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:55.814 BaseBdev1 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.814 14:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.752 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.011 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.011 "name": "raid_bdev1", 00:22:57.011 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:57.011 "strip_size_kb": 0, 00:22:57.011 "state": "online", 00:22:57.011 "raid_level": "raid1", 00:22:57.011 "superblock": true, 00:22:57.011 "num_base_bdevs": 2, 00:22:57.011 "num_base_bdevs_discovered": 1, 00:22:57.011 "num_base_bdevs_operational": 1, 00:22:57.011 "base_bdevs_list": [ 00:22:57.011 { 00:22:57.011 "name": null, 00:22:57.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.011 "is_configured": false, 00:22:57.011 "data_offset": 0, 00:22:57.011 "data_size": 7936 00:22:57.011 }, 00:22:57.011 { 00:22:57.011 "name": "BaseBdev2", 00:22:57.011 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:57.011 "is_configured": true, 00:22:57.011 "data_offset": 256, 00:22:57.011 "data_size": 7936 00:22:57.011 } 00:22:57.011 ] 00:22:57.011 }' 00:22:57.011 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.011 14:22:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.269 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.528 "name": "raid_bdev1", 00:22:57.528 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:57.528 "strip_size_kb": 0, 00:22:57.528 "state": "online", 00:22:57.528 "raid_level": "raid1", 00:22:57.528 "superblock": true, 00:22:57.528 "num_base_bdevs": 2, 00:22:57.528 "num_base_bdevs_discovered": 1, 00:22:57.528 "num_base_bdevs_operational": 1, 00:22:57.528 "base_bdevs_list": [ 00:22:57.528 { 00:22:57.528 "name": null, 00:22:57.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.528 "is_configured": false, 00:22:57.528 "data_offset": 0, 00:22:57.528 
"data_size": 7936 00:22:57.528 }, 00:22:57.528 { 00:22:57.528 "name": "BaseBdev2", 00:22:57.528 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:57.528 "is_configured": true, 00:22:57.528 "data_offset": 256, 00:22:57.528 "data_size": 7936 00:22:57.528 } 00:22:57.528 ] 00:22:57.528 }' 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.528 [2024-11-27 14:22:27.904487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:57.528 [2024-11-27 14:22:27.904726] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:57.528 [2024-11-27 14:22:27.904754] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:57.528 request: 00:22:57.528 { 00:22:57.528 "base_bdev": "BaseBdev1", 00:22:57.528 "raid_bdev": "raid_bdev1", 00:22:57.528 "method": "bdev_raid_add_base_bdev", 00:22:57.528 "req_id": 1 00:22:57.528 } 00:22:57.528 Got JSON-RPC error response 00:22:57.528 response: 00:22:57.528 { 00:22:57.528 "code": -22, 00:22:57.528 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:57.528 } 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.528 14:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.464 "name": "raid_bdev1", 00:22:58.464 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:58.464 "strip_size_kb": 0, 00:22:58.464 "state": "online", 00:22:58.464 "raid_level": "raid1", 00:22:58.464 "superblock": true, 00:22:58.464 "num_base_bdevs": 2, 00:22:58.464 "num_base_bdevs_discovered": 1, 00:22:58.464 "num_base_bdevs_operational": 1, 00:22:58.464 "base_bdevs_list": [ 
00:22:58.464 { 00:22:58.464 "name": null, 00:22:58.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.464 "is_configured": false, 00:22:58.464 "data_offset": 0, 00:22:58.464 "data_size": 7936 00:22:58.464 }, 00:22:58.464 { 00:22:58.464 "name": "BaseBdev2", 00:22:58.464 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:58.464 "is_configured": true, 00:22:58.464 "data_offset": 256, 00:22:58.464 "data_size": 7936 00:22:58.464 } 00:22:58.464 ] 00:22:58.464 }' 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.464 14:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.033 "name": "raid_bdev1", 00:22:59.033 "uuid": "e944f1b4-20e8-4253-acd2-842707d5dbc3", 00:22:59.033 "strip_size_kb": 0, 00:22:59.033 "state": "online", 00:22:59.033 "raid_level": "raid1", 00:22:59.033 "superblock": true, 00:22:59.033 "num_base_bdevs": 2, 00:22:59.033 "num_base_bdevs_discovered": 1, 00:22:59.033 "num_base_bdevs_operational": 1, 00:22:59.033 "base_bdevs_list": [ 00:22:59.033 { 00:22:59.033 "name": null, 00:22:59.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.033 "is_configured": false, 00:22:59.033 "data_offset": 0, 00:22:59.033 "data_size": 7936 00:22:59.033 }, 00:22:59.033 { 00:22:59.033 "name": "BaseBdev2", 00:22:59.033 "uuid": "525c8588-28c5-538f-9f70-ad282cec4ade", 00:22:59.033 "is_configured": true, 00:22:59.033 "data_offset": 256, 00:22:59.033 "data_size": 7936 00:22:59.033 } 00:22:59.033 ] 00:22:59.033 }' 00:22:59.033 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88465 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88465 ']' 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88465 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.302 
14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88465 00:22:59.302 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.303 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.303 killing process with pid 88465 00:22:59.303 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88465' 00:22:59.303 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88465 00:22:59.303 Received shutdown signal, test time was about 60.000000 seconds 00:22:59.303 00:22:59.303 Latency(us) 00:22:59.303 [2024-11-27T14:22:29.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.303 [2024-11-27T14:22:29.816Z] =================================================================================================================== 00:22:59.303 [2024-11-27T14:22:29.816Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.303 [2024-11-27 14:22:29.639412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:59.303 14:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88465 00:22:59.303 [2024-11-27 14:22:29.639559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.303 [2024-11-27 14:22:29.639626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.303 [2024-11-27 14:22:29.639645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:59.566 [2024-11-27 14:22:29.911388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:00.501 14:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:23:00.501 00:23:00.501 real 0m21.382s 00:23:00.501 user 0m29.037s 00:23:00.501 sys 0m2.446s 00:23:00.501 14:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.501 14:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.501 ************************************ 00:23:00.501 END TEST raid_rebuild_test_sb_md_separate 00:23:00.501 ************************************ 00:23:00.501 14:22:30 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:23:00.501 14:22:30 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:00.501 14:22:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:00.501 14:22:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.501 14:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:00.501 ************************************ 00:23:00.501 START TEST raid_state_function_test_sb_md_interleaved 00:23:00.501 ************************************ 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.501 14:22:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:00.501 14:22:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89168 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89168' 00:23:00.501 Process raid pid: 89168 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89168 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89168 ']' 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.501 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.502 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.502 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.502 14:22:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:00.761 [2024-11-27 14:22:31.109787] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:23:00.761 [2024-11-27 14:22:31.110005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.020 [2024-11-27 14:22:31.297897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.020 [2024-11-27 14:22:31.467668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.279 [2024-11-27 14:22:31.679083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:01.279 [2024-11-27 14:22:31.679160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.847 [2024-11-27 14:22:32.122500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.847 [2024-11-27 14:22:32.122772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.847 [2024-11-27 14:22:32.122943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.847 [2024-11-27 14:22:32.123015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.847 14:22:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.847 14:22:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.847 "name": "Existed_Raid", 00:23:01.847 "uuid": "09a386b5-6083-4312-97a6-906eb7d9e293", 00:23:01.847 "strip_size_kb": 0, 00:23:01.847 "state": "configuring", 00:23:01.847 "raid_level": "raid1", 00:23:01.847 "superblock": true, 00:23:01.847 "num_base_bdevs": 2, 00:23:01.847 "num_base_bdevs_discovered": 0, 00:23:01.847 "num_base_bdevs_operational": 2, 00:23:01.847 "base_bdevs_list": [ 00:23:01.847 { 00:23:01.847 "name": "BaseBdev1", 00:23:01.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.847 "is_configured": false, 00:23:01.847 "data_offset": 0, 00:23:01.847 "data_size": 0 00:23:01.847 }, 00:23:01.847 { 00:23:01.847 "name": "BaseBdev2", 00:23:01.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.847 "is_configured": false, 00:23:01.847 "data_offset": 0, 00:23:01.847 "data_size": 0 00:23:01.847 } 00:23:01.847 ] 00:23:01.847 }' 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.847 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.105 [2024-11-27 14:22:32.598555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.105 [2024-11-27 14:22:32.598597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.105 [2024-11-27 14:22:32.606556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:02.105 [2024-11-27 14:22:32.606619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:02.105 [2024-11-27 14:22:32.606650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.105 [2024-11-27 14:22:32.606668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.105 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:02.106 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.106 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.364 [2024-11-27 14:22:32.653187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.364 BaseBdev1 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.364 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.364 [ 00:23:02.364 { 00:23:02.364 "name": "BaseBdev1", 00:23:02.364 "aliases": [ 00:23:02.365 "82540fa7-05ee-442f-9d58-b2bcc6390343" 00:23:02.365 ], 00:23:02.365 "product_name": "Malloc disk", 00:23:02.365 "block_size": 4128, 00:23:02.365 "num_blocks": 8192, 00:23:02.365 "uuid": "82540fa7-05ee-442f-9d58-b2bcc6390343", 00:23:02.365 "md_size": 32, 00:23:02.365 
"md_interleave": true, 00:23:02.365 "dif_type": 0, 00:23:02.365 "assigned_rate_limits": { 00:23:02.365 "rw_ios_per_sec": 0, 00:23:02.365 "rw_mbytes_per_sec": 0, 00:23:02.365 "r_mbytes_per_sec": 0, 00:23:02.365 "w_mbytes_per_sec": 0 00:23:02.365 }, 00:23:02.365 "claimed": true, 00:23:02.365 "claim_type": "exclusive_write", 00:23:02.365 "zoned": false, 00:23:02.365 "supported_io_types": { 00:23:02.365 "read": true, 00:23:02.365 "write": true, 00:23:02.365 "unmap": true, 00:23:02.365 "flush": true, 00:23:02.365 "reset": true, 00:23:02.365 "nvme_admin": false, 00:23:02.365 "nvme_io": false, 00:23:02.365 "nvme_io_md": false, 00:23:02.365 "write_zeroes": true, 00:23:02.365 "zcopy": true, 00:23:02.365 "get_zone_info": false, 00:23:02.365 "zone_management": false, 00:23:02.365 "zone_append": false, 00:23:02.365 "compare": false, 00:23:02.365 "compare_and_write": false, 00:23:02.365 "abort": true, 00:23:02.365 "seek_hole": false, 00:23:02.365 "seek_data": false, 00:23:02.365 "copy": true, 00:23:02.365 "nvme_iov_md": false 00:23:02.365 }, 00:23:02.365 "memory_domains": [ 00:23:02.365 { 00:23:02.365 "dma_device_id": "system", 00:23:02.365 "dma_device_type": 1 00:23:02.365 }, 00:23:02.365 { 00:23:02.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.365 "dma_device_type": 2 00:23:02.365 } 00:23:02.365 ], 00:23:02.365 "driver_specific": {} 00:23:02.365 } 00:23:02.365 ] 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:02.365 14:22:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.365 "name": "Existed_Raid", 00:23:02.365 "uuid": "ad09c78d-a76c-451a-afee-4d7281eb90b1", 00:23:02.365 "strip_size_kb": 0, 00:23:02.365 "state": "configuring", 00:23:02.365 "raid_level": "raid1", 
00:23:02.365 "superblock": true, 00:23:02.365 "num_base_bdevs": 2, 00:23:02.365 "num_base_bdevs_discovered": 1, 00:23:02.365 "num_base_bdevs_operational": 2, 00:23:02.365 "base_bdevs_list": [ 00:23:02.365 { 00:23:02.365 "name": "BaseBdev1", 00:23:02.365 "uuid": "82540fa7-05ee-442f-9d58-b2bcc6390343", 00:23:02.365 "is_configured": true, 00:23:02.365 "data_offset": 256, 00:23:02.365 "data_size": 7936 00:23:02.365 }, 00:23:02.365 { 00:23:02.365 "name": "BaseBdev2", 00:23:02.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.365 "is_configured": false, 00:23:02.365 "data_offset": 0, 00:23:02.365 "data_size": 0 00:23:02.365 } 00:23:02.365 ] 00:23:02.365 }' 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.365 14:22:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.932 [2024-11-27 14:22:33.197542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.932 [2024-11-27 14:22:33.197614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.932 [2024-11-27 14:22:33.205557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.932 [2024-11-27 14:22:33.208080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.932 [2024-11-27 14:22:33.208153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.932 
14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.932 "name": "Existed_Raid", 00:23:02.932 "uuid": "6a725d60-491d-4450-9b4e-636a08adbfe5", 00:23:02.932 "strip_size_kb": 0, 00:23:02.932 "state": "configuring", 00:23:02.932 "raid_level": "raid1", 00:23:02.932 "superblock": true, 00:23:02.932 "num_base_bdevs": 2, 00:23:02.932 "num_base_bdevs_discovered": 1, 00:23:02.932 "num_base_bdevs_operational": 2, 00:23:02.932 "base_bdevs_list": [ 00:23:02.932 { 00:23:02.932 "name": "BaseBdev1", 00:23:02.932 "uuid": "82540fa7-05ee-442f-9d58-b2bcc6390343", 00:23:02.932 "is_configured": true, 00:23:02.932 "data_offset": 256, 00:23:02.932 "data_size": 7936 00:23:02.932 }, 00:23:02.932 { 00:23:02.932 "name": "BaseBdev2", 00:23:02.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.932 "is_configured": false, 00:23:02.932 "data_offset": 0, 00:23:02.932 "data_size": 0 00:23:02.932 } 00:23:02.932 ] 00:23:02.932 }' 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:23:02.932 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.501 [2024-11-27 14:22:33.793325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:03.501 [2024-11-27 14:22:33.793652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:03.501 [2024-11-27 14:22:33.793681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:03.501 [2024-11-27 14:22:33.793783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:03.501 [2024-11-27 14:22:33.793915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:03.501 [2024-11-27 14:22:33.793947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:03.501 BaseBdev2 00:23:03.501 [2024-11-27 14:22:33.794042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.501 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.501 [ 00:23:03.501 { 00:23:03.501 "name": "BaseBdev2", 00:23:03.501 "aliases": [ 00:23:03.501 "fdf4a26c-26c5-46ad-8c8d-56a2ae12a5ee" 00:23:03.501 ], 00:23:03.501 "product_name": "Malloc disk", 00:23:03.501 "block_size": 4128, 00:23:03.501 "num_blocks": 8192, 00:23:03.501 "uuid": "fdf4a26c-26c5-46ad-8c8d-56a2ae12a5ee", 00:23:03.501 "md_size": 32, 00:23:03.501 "md_interleave": true, 00:23:03.501 "dif_type": 0, 00:23:03.501 "assigned_rate_limits": { 00:23:03.501 "rw_ios_per_sec": 0, 00:23:03.501 "rw_mbytes_per_sec": 0, 00:23:03.501 "r_mbytes_per_sec": 0, 00:23:03.501 "w_mbytes_per_sec": 0 00:23:03.501 }, 00:23:03.501 "claimed": true, 00:23:03.501 "claim_type": "exclusive_write", 
00:23:03.501 "zoned": false, 00:23:03.501 "supported_io_types": { 00:23:03.501 "read": true, 00:23:03.501 "write": true, 00:23:03.501 "unmap": true, 00:23:03.501 "flush": true, 00:23:03.501 "reset": true, 00:23:03.501 "nvme_admin": false, 00:23:03.501 "nvme_io": false, 00:23:03.501 "nvme_io_md": false, 00:23:03.501 "write_zeroes": true, 00:23:03.501 "zcopy": true, 00:23:03.501 "get_zone_info": false, 00:23:03.501 "zone_management": false, 00:23:03.501 "zone_append": false, 00:23:03.501 "compare": false, 00:23:03.501 "compare_and_write": false, 00:23:03.501 "abort": true, 00:23:03.501 "seek_hole": false, 00:23:03.501 "seek_data": false, 00:23:03.501 "copy": true, 00:23:03.501 "nvme_iov_md": false 00:23:03.501 }, 00:23:03.501 "memory_domains": [ 00:23:03.501 { 00:23:03.501 "dma_device_id": "system", 00:23:03.501 "dma_device_type": 1 00:23:03.501 }, 00:23:03.501 { 00:23:03.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.501 "dma_device_type": 2 00:23:03.502 } 00:23:03.502 ], 00:23:03.502 "driver_specific": {} 00:23:03.502 } 00:23:03.502 ] 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.502 
14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.502 "name": "Existed_Raid", 00:23:03.502 "uuid": "6a725d60-491d-4450-9b4e-636a08adbfe5", 00:23:03.502 "strip_size_kb": 0, 00:23:03.502 "state": "online", 00:23:03.502 "raid_level": "raid1", 00:23:03.502 "superblock": true, 00:23:03.502 "num_base_bdevs": 2, 00:23:03.502 "num_base_bdevs_discovered": 2, 00:23:03.502 
"num_base_bdevs_operational": 2, 00:23:03.502 "base_bdevs_list": [ 00:23:03.502 { 00:23:03.502 "name": "BaseBdev1", 00:23:03.502 "uuid": "82540fa7-05ee-442f-9d58-b2bcc6390343", 00:23:03.502 "is_configured": true, 00:23:03.502 "data_offset": 256, 00:23:03.502 "data_size": 7936 00:23:03.502 }, 00:23:03.502 { 00:23:03.502 "name": "BaseBdev2", 00:23:03.502 "uuid": "fdf4a26c-26c5-46ad-8c8d-56a2ae12a5ee", 00:23:03.502 "is_configured": true, 00:23:03.502 "data_offset": 256, 00:23:03.502 "data_size": 7936 00:23:03.502 } 00:23:03.502 ] 00:23:03.502 }' 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.502 14:22:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.071 14:22:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.071 [2024-11-27 14:22:34.329990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:04.071 "name": "Existed_Raid", 00:23:04.071 "aliases": [ 00:23:04.071 "6a725d60-491d-4450-9b4e-636a08adbfe5" 00:23:04.071 ], 00:23:04.071 "product_name": "Raid Volume", 00:23:04.071 "block_size": 4128, 00:23:04.071 "num_blocks": 7936, 00:23:04.071 "uuid": "6a725d60-491d-4450-9b4e-636a08adbfe5", 00:23:04.071 "md_size": 32, 00:23:04.071 "md_interleave": true, 00:23:04.071 "dif_type": 0, 00:23:04.071 "assigned_rate_limits": { 00:23:04.071 "rw_ios_per_sec": 0, 00:23:04.071 "rw_mbytes_per_sec": 0, 00:23:04.071 "r_mbytes_per_sec": 0, 00:23:04.071 "w_mbytes_per_sec": 0 00:23:04.071 }, 00:23:04.071 "claimed": false, 00:23:04.071 "zoned": false, 00:23:04.071 "supported_io_types": { 00:23:04.071 "read": true, 00:23:04.071 "write": true, 00:23:04.071 "unmap": false, 00:23:04.071 "flush": false, 00:23:04.071 "reset": true, 00:23:04.071 "nvme_admin": false, 00:23:04.071 "nvme_io": false, 00:23:04.071 "nvme_io_md": false, 00:23:04.071 "write_zeroes": true, 00:23:04.071 "zcopy": false, 00:23:04.071 "get_zone_info": false, 00:23:04.071 "zone_management": false, 00:23:04.071 "zone_append": false, 00:23:04.071 "compare": false, 00:23:04.071 "compare_and_write": false, 00:23:04.071 "abort": false, 00:23:04.071 "seek_hole": false, 00:23:04.071 "seek_data": false, 00:23:04.071 "copy": false, 00:23:04.071 "nvme_iov_md": false 00:23:04.071 }, 00:23:04.071 "memory_domains": [ 00:23:04.071 { 00:23:04.071 "dma_device_id": "system", 00:23:04.071 "dma_device_type": 1 00:23:04.071 }, 00:23:04.071 { 00:23:04.071 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:04.071 "dma_device_type": 2 00:23:04.071 }, 00:23:04.071 { 00:23:04.071 "dma_device_id": "system", 00:23:04.071 "dma_device_type": 1 00:23:04.071 }, 00:23:04.071 { 00:23:04.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.071 "dma_device_type": 2 00:23:04.071 } 00:23:04.071 ], 00:23:04.071 "driver_specific": { 00:23:04.071 "raid": { 00:23:04.071 "uuid": "6a725d60-491d-4450-9b4e-636a08adbfe5", 00:23:04.071 "strip_size_kb": 0, 00:23:04.071 "state": "online", 00:23:04.071 "raid_level": "raid1", 00:23:04.071 "superblock": true, 00:23:04.071 "num_base_bdevs": 2, 00:23:04.071 "num_base_bdevs_discovered": 2, 00:23:04.071 "num_base_bdevs_operational": 2, 00:23:04.071 "base_bdevs_list": [ 00:23:04.071 { 00:23:04.071 "name": "BaseBdev1", 00:23:04.071 "uuid": "82540fa7-05ee-442f-9d58-b2bcc6390343", 00:23:04.071 "is_configured": true, 00:23:04.071 "data_offset": 256, 00:23:04.071 "data_size": 7936 00:23:04.071 }, 00:23:04.071 { 00:23:04.071 "name": "BaseBdev2", 00:23:04.071 "uuid": "fdf4a26c-26c5-46ad-8c8d-56a2ae12a5ee", 00:23:04.071 "is_configured": true, 00:23:04.071 "data_offset": 256, 00:23:04.071 "data_size": 7936 00:23:04.071 } 00:23:04.071 ] 00:23:04.071 } 00:23:04.071 } 00:23:04.071 }' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:04.071 BaseBdev2' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.071 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.072 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:04.330 
14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.330 [2024-11-27 14:22:34.589720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.330 14:22:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.330 "name": "Existed_Raid", 00:23:04.330 "uuid": "6a725d60-491d-4450-9b4e-636a08adbfe5", 00:23:04.330 "strip_size_kb": 0, 00:23:04.330 "state": "online", 00:23:04.330 "raid_level": "raid1", 00:23:04.330 "superblock": true, 00:23:04.330 "num_base_bdevs": 2, 00:23:04.330 "num_base_bdevs_discovered": 1, 00:23:04.330 "num_base_bdevs_operational": 1, 00:23:04.330 "base_bdevs_list": [ 00:23:04.330 { 00:23:04.330 "name": null, 00:23:04.330 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:04.330 "is_configured": false, 00:23:04.330 "data_offset": 0, 00:23:04.330 "data_size": 7936 00:23:04.330 }, 00:23:04.330 { 00:23:04.330 "name": "BaseBdev2", 00:23:04.330 "uuid": "fdf4a26c-26c5-46ad-8c8d-56a2ae12a5ee", 00:23:04.330 "is_configured": true, 00:23:04.330 "data_offset": 256, 00:23:04.330 "data_size": 7936 00:23:04.330 } 00:23:04.330 ] 00:23:04.330 }' 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.330 14:22:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:04.898 14:22:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 [2024-11-27 14:22:35.224469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:04.898 [2024-11-27 14:22:35.224620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.898 [2024-11-27 14:22:35.307075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.898 [2024-11-27 14:22:35.307147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.898 [2024-11-27 14:22:35.307169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89168 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89168 ']' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89168 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89168 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.898 killing process with pid 89168 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89168' 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89168 00:23:04.898 [2024-11-27 14:22:35.394412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:04.898 14:22:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89168 00:23:05.158 [2024-11-27 14:22:35.409889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:06.092 
14:22:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:23:06.092 00:23:06.092 real 0m5.525s 00:23:06.092 user 0m8.325s 00:23:06.092 sys 0m0.785s 00:23:06.092 14:22:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.092 14:22:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.092 ************************************ 00:23:06.092 END TEST raid_state_function_test_sb_md_interleaved 00:23:06.092 ************************************ 00:23:06.092 14:22:36 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:06.092 14:22:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:06.092 14:22:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.092 14:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:06.092 ************************************ 00:23:06.092 START TEST raid_superblock_test_md_interleaved 00:23:06.092 ************************************ 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89416 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89416 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89416 ']' 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.092 14:22:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.406 [2024-11-27 14:22:36.691286] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:23:06.406 [2024-11-27 14:22:36.691473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89416 ] 00:23:06.406 [2024-11-27 14:22:36.874997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.687 [2024-11-27 14:22:37.006127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.945 [2024-11-27 14:22:37.206133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.945 [2024-11-27 14:22:37.206242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.513 malloc1 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.513 [2024-11-27 14:22:37.768890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:07.513 [2024-11-27 14:22:37.768970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.513 [2024-11-27 14:22:37.769002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:07.513 [2024-11-27 14:22:37.769016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.513 
[2024-11-27 14:22:37.771634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.513 [2024-11-27 14:22:37.771693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:07.513 pt1 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.513 malloc2 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.513 [2024-11-27 14:22:37.820539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:07.513 [2024-11-27 14:22:37.820623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.513 [2024-11-27 14:22:37.820670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:07.513 [2024-11-27 14:22:37.820683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.513 [2024-11-27 14:22:37.823428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.513 [2024-11-27 14:22:37.823487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:07.513 pt2 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.513 [2024-11-27 14:22:37.828595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:07.513 [2024-11-27 14:22:37.831370] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:07.513 [2024-11-27 14:22:37.831645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:07.513 [2024-11-27 14:22:37.831717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:07.513 [2024-11-27 14:22:37.831810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:07.513 [2024-11-27 14:22:37.831978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:07.513 [2024-11-27 14:22:37.832000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:07.513 [2024-11-27 14:22:37.832094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.513 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.514 
14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.514 "name": "raid_bdev1", 00:23:07.514 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:07.514 "strip_size_kb": 0, 00:23:07.514 "state": "online", 00:23:07.514 "raid_level": "raid1", 00:23:07.514 "superblock": true, 00:23:07.514 "num_base_bdevs": 2, 00:23:07.514 "num_base_bdevs_discovered": 2, 00:23:07.514 "num_base_bdevs_operational": 2, 00:23:07.514 "base_bdevs_list": [ 00:23:07.514 { 00:23:07.514 "name": "pt1", 00:23:07.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:07.514 "is_configured": true, 00:23:07.514 "data_offset": 256, 00:23:07.514 "data_size": 7936 00:23:07.514 }, 00:23:07.514 { 00:23:07.514 "name": "pt2", 00:23:07.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.514 "is_configured": true, 00:23:07.514 "data_offset": 256, 00:23:07.514 "data_size": 7936 00:23:07.514 } 00:23:07.514 ] 00:23:07.514 }' 00:23:07.514 14:22:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.514 14:22:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:08.081 [2024-11-27 14:22:38.365176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:08.081 "name": "raid_bdev1", 00:23:08.081 "aliases": [ 00:23:08.081 "0efbac89-63b8-4ec2-9f8e-57e59e391fbd" 00:23:08.081 ], 00:23:08.081 "product_name": "Raid Volume", 00:23:08.081 "block_size": 4128, 00:23:08.081 "num_blocks": 7936, 00:23:08.081 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:08.081 "md_size": 32, 
00:23:08.081 "md_interleave": true, 00:23:08.081 "dif_type": 0, 00:23:08.081 "assigned_rate_limits": { 00:23:08.081 "rw_ios_per_sec": 0, 00:23:08.081 "rw_mbytes_per_sec": 0, 00:23:08.081 "r_mbytes_per_sec": 0, 00:23:08.081 "w_mbytes_per_sec": 0 00:23:08.081 }, 00:23:08.081 "claimed": false, 00:23:08.081 "zoned": false, 00:23:08.081 "supported_io_types": { 00:23:08.081 "read": true, 00:23:08.081 "write": true, 00:23:08.081 "unmap": false, 00:23:08.081 "flush": false, 00:23:08.081 "reset": true, 00:23:08.081 "nvme_admin": false, 00:23:08.081 "nvme_io": false, 00:23:08.081 "nvme_io_md": false, 00:23:08.081 "write_zeroes": true, 00:23:08.081 "zcopy": false, 00:23:08.081 "get_zone_info": false, 00:23:08.081 "zone_management": false, 00:23:08.081 "zone_append": false, 00:23:08.081 "compare": false, 00:23:08.081 "compare_and_write": false, 00:23:08.081 "abort": false, 00:23:08.081 "seek_hole": false, 00:23:08.081 "seek_data": false, 00:23:08.081 "copy": false, 00:23:08.081 "nvme_iov_md": false 00:23:08.081 }, 00:23:08.081 "memory_domains": [ 00:23:08.081 { 00:23:08.081 "dma_device_id": "system", 00:23:08.081 "dma_device_type": 1 00:23:08.081 }, 00:23:08.081 { 00:23:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.081 "dma_device_type": 2 00:23:08.081 }, 00:23:08.081 { 00:23:08.081 "dma_device_id": "system", 00:23:08.081 "dma_device_type": 1 00:23:08.081 }, 00:23:08.081 { 00:23:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.081 "dma_device_type": 2 00:23:08.081 } 00:23:08.081 ], 00:23:08.081 "driver_specific": { 00:23:08.081 "raid": { 00:23:08.081 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:08.081 "strip_size_kb": 0, 00:23:08.081 "state": "online", 00:23:08.081 "raid_level": "raid1", 00:23:08.081 "superblock": true, 00:23:08.081 "num_base_bdevs": 2, 00:23:08.081 "num_base_bdevs_discovered": 2, 00:23:08.081 "num_base_bdevs_operational": 2, 00:23:08.081 "base_bdevs_list": [ 00:23:08.081 { 00:23:08.081 "name": "pt1", 00:23:08.081 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:08.081 "is_configured": true, 00:23:08.081 "data_offset": 256, 00:23:08.081 "data_size": 7936 00:23:08.081 }, 00:23:08.081 { 00:23:08.081 "name": "pt2", 00:23:08.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:08.081 "is_configured": true, 00:23:08.081 "data_offset": 256, 00:23:08.081 "data_size": 7936 00:23:08.081 } 00:23:08.081 ] 00:23:08.081 } 00:23:08.081 } 00:23:08.081 }' 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:08.081 pt2' 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.081 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:08.082 14:22:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:08.082 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:08.082 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:08.082 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:08.082 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.082 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:08.341 [2024-11-27 14:22:38.641119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0efbac89-63b8-4ec2-9f8e-57e59e391fbd 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 0efbac89-63b8-4ec2-9f8e-57e59e391fbd ']' 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.341 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.341 [2024-11-27 14:22:38.692795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.342 [2024-11-27 14:22:38.692871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:08.342 [2024-11-27 14:22:38.692984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.342 [2024-11-27 14:22:38.693064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.342 [2024-11-27 14:22:38.693085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.342 14:22:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.342 14:22:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.342 [2024-11-27 14:22:38.832854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:08.342 [2024-11-27 14:22:38.835589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:08.342 [2024-11-27 14:22:38.835722] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:23:08.342 [2024-11-27 14:22:38.835810] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:08.342 [2024-11-27 14:22:38.835870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.342 [2024-11-27 14:22:38.835887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:08.342 request: 00:23:08.342 { 00:23:08.342 "name": "raid_bdev1", 00:23:08.342 "raid_level": "raid1", 00:23:08.342 "base_bdevs": [ 00:23:08.342 "malloc1", 00:23:08.342 "malloc2" 00:23:08.342 ], 00:23:08.342 "superblock": false, 00:23:08.342 "method": "bdev_raid_create", 00:23:08.342 "req_id": 1 00:23:08.342 } 00:23:08.342 Got JSON-RPC error response 00:23:08.342 response: 00:23:08.342 { 00:23:08.342 "code": -17, 00:23:08.342 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:08.342 } 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.342 14:22:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.342 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.615 [2024-11-27 14:22:38.892863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:08.615 [2024-11-27 14:22:38.892934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.615 [2024-11-27 14:22:38.892958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:08.615 [2024-11-27 14:22:38.892975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.615 [2024-11-27 14:22:38.895778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.615 [2024-11-27 14:22:38.895867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:08.615 [2024-11-27 14:22:38.895936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:08.615 [2024-11-27 14:22:38.896009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:08.615 pt1 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.615 14:22:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.615 
"name": "raid_bdev1", 00:23:08.615 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:08.615 "strip_size_kb": 0, 00:23:08.615 "state": "configuring", 00:23:08.615 "raid_level": "raid1", 00:23:08.615 "superblock": true, 00:23:08.615 "num_base_bdevs": 2, 00:23:08.615 "num_base_bdevs_discovered": 1, 00:23:08.615 "num_base_bdevs_operational": 2, 00:23:08.615 "base_bdevs_list": [ 00:23:08.615 { 00:23:08.615 "name": "pt1", 00:23:08.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:08.615 "is_configured": true, 00:23:08.615 "data_offset": 256, 00:23:08.615 "data_size": 7936 00:23:08.615 }, 00:23:08.615 { 00:23:08.615 "name": null, 00:23:08.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:08.615 "is_configured": false, 00:23:08.615 "data_offset": 256, 00:23:08.615 "data_size": 7936 00:23:08.615 } 00:23:08.615 ] 00:23:08.615 }' 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.615 14:22:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.180 [2024-11-27 14:22:39.425073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:09.180 [2024-11-27 14:22:39.425167] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.180 [2024-11-27 14:22:39.425198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:09.180 [2024-11-27 14:22:39.425231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.180 [2024-11-27 14:22:39.425522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.180 [2024-11-27 14:22:39.425552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:09.180 [2024-11-27 14:22:39.425619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:09.180 [2024-11-27 14:22:39.425655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:09.180 [2024-11-27 14:22:39.425768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:09.180 [2024-11-27 14:22:39.425789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:09.180 [2024-11-27 14:22:39.425925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:09.180 [2024-11-27 14:22:39.426021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:09.180 [2024-11-27 14:22:39.426037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:09.180 [2024-11-27 14:22:39.426123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.180 pt2 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:09.180 14:22:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.180 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.181 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.181 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.181 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.181 "name": 
"raid_bdev1", 00:23:09.181 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:09.181 "strip_size_kb": 0, 00:23:09.181 "state": "online", 00:23:09.181 "raid_level": "raid1", 00:23:09.181 "superblock": true, 00:23:09.181 "num_base_bdevs": 2, 00:23:09.181 "num_base_bdevs_discovered": 2, 00:23:09.181 "num_base_bdevs_operational": 2, 00:23:09.181 "base_bdevs_list": [ 00:23:09.181 { 00:23:09.181 "name": "pt1", 00:23:09.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:09.181 "is_configured": true, 00:23:09.181 "data_offset": 256, 00:23:09.181 "data_size": 7936 00:23:09.181 }, 00:23:09.181 { 00:23:09.181 "name": "pt2", 00:23:09.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.181 "is_configured": true, 00:23:09.181 "data_offset": 256, 00:23:09.181 "data_size": 7936 00:23:09.181 } 00:23:09.181 ] 00:23:09.181 }' 00:23:09.181 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.181 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:09.748 14:22:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.748 [2024-11-27 14:22:39.973563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:09.748 14:22:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:09.748 "name": "raid_bdev1", 00:23:09.748 "aliases": [ 00:23:09.748 "0efbac89-63b8-4ec2-9f8e-57e59e391fbd" 00:23:09.748 ], 00:23:09.748 "product_name": "Raid Volume", 00:23:09.748 "block_size": 4128, 00:23:09.748 "num_blocks": 7936, 00:23:09.748 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:09.748 "md_size": 32, 00:23:09.748 "md_interleave": true, 00:23:09.748 "dif_type": 0, 00:23:09.748 "assigned_rate_limits": { 00:23:09.748 "rw_ios_per_sec": 0, 00:23:09.748 "rw_mbytes_per_sec": 0, 00:23:09.748 "r_mbytes_per_sec": 0, 00:23:09.748 "w_mbytes_per_sec": 0 00:23:09.748 }, 00:23:09.748 "claimed": false, 00:23:09.748 "zoned": false, 00:23:09.748 "supported_io_types": { 00:23:09.748 "read": true, 00:23:09.748 "write": true, 00:23:09.748 "unmap": false, 00:23:09.748 "flush": false, 00:23:09.748 "reset": true, 00:23:09.748 "nvme_admin": false, 00:23:09.748 "nvme_io": false, 00:23:09.748 "nvme_io_md": false, 00:23:09.748 "write_zeroes": true, 00:23:09.748 "zcopy": false, 00:23:09.748 "get_zone_info": false, 00:23:09.748 "zone_management": false, 00:23:09.748 "zone_append": false, 00:23:09.748 "compare": false, 00:23:09.748 "compare_and_write": false, 00:23:09.748 "abort": false, 00:23:09.748 "seek_hole": false, 00:23:09.748 "seek_data": false, 00:23:09.748 "copy": false, 00:23:09.748 "nvme_iov_md": 
false 00:23:09.748 }, 00:23:09.748 "memory_domains": [ 00:23:09.748 { 00:23:09.748 "dma_device_id": "system", 00:23:09.748 "dma_device_type": 1 00:23:09.748 }, 00:23:09.748 { 00:23:09.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.748 "dma_device_type": 2 00:23:09.748 }, 00:23:09.748 { 00:23:09.748 "dma_device_id": "system", 00:23:09.748 "dma_device_type": 1 00:23:09.748 }, 00:23:09.748 { 00:23:09.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.748 "dma_device_type": 2 00:23:09.748 } 00:23:09.748 ], 00:23:09.748 "driver_specific": { 00:23:09.748 "raid": { 00:23:09.748 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:09.748 "strip_size_kb": 0, 00:23:09.748 "state": "online", 00:23:09.748 "raid_level": "raid1", 00:23:09.748 "superblock": true, 00:23:09.748 "num_base_bdevs": 2, 00:23:09.748 "num_base_bdevs_discovered": 2, 00:23:09.748 "num_base_bdevs_operational": 2, 00:23:09.748 "base_bdevs_list": [ 00:23:09.748 { 00:23:09.748 "name": "pt1", 00:23:09.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:09.748 "is_configured": true, 00:23:09.748 "data_offset": 256, 00:23:09.748 "data_size": 7936 00:23:09.748 }, 00:23:09.748 { 00:23:09.748 "name": "pt2", 00:23:09.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.748 "is_configured": true, 00:23:09.748 "data_offset": 256, 00:23:09.748 "data_size": 7936 00:23:09.748 } 00:23:09.748 ] 00:23:09.748 } 00:23:09.748 } 00:23:09.748 }' 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:09.748 pt2' 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.748 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.749 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.749 [2024-11-27 14:22:40.249659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 0efbac89-63b8-4ec2-9f8e-57e59e391fbd '!=' 0efbac89-63b8-4ec2-9f8e-57e59e391fbd ']' 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.007 [2024-11-27 14:22:40.325405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:23:10.007 "name": "raid_bdev1", 00:23:10.007 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:10.007 "strip_size_kb": 0, 00:23:10.007 "state": "online", 00:23:10.007 "raid_level": "raid1", 00:23:10.007 "superblock": true, 00:23:10.007 "num_base_bdevs": 2, 00:23:10.007 "num_base_bdevs_discovered": 1, 00:23:10.007 "num_base_bdevs_operational": 1, 00:23:10.007 "base_bdevs_list": [ 00:23:10.007 { 00:23:10.007 "name": null, 00:23:10.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.007 "is_configured": false, 00:23:10.007 "data_offset": 0, 00:23:10.007 "data_size": 7936 00:23:10.007 }, 00:23:10.007 { 00:23:10.007 "name": "pt2", 00:23:10.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.007 "is_configured": true, 00:23:10.007 "data_offset": 256, 00:23:10.007 "data_size": 7936 00:23:10.007 } 00:23:10.007 ] 00:23:10.007 }' 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.007 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.574 [2024-11-27 14:22:40.845536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:10.574 [2024-11-27 14:22:40.845572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.574 [2024-11-27 14:22:40.845698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.574 [2024-11-27 14:22:40.845763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:23:10.574 [2024-11-27 14:22:40.845782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.574 [2024-11-27 14:22:40.921543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:10.574 [2024-11-27 14:22:40.921654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.574 [2024-11-27 14:22:40.921677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:10.574 [2024-11-27 14:22:40.921693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.574 [2024-11-27 14:22:40.924457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.574 [2024-11-27 14:22:40.924536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:10.574 [2024-11-27 14:22:40.924605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:10.574 [2024-11-27 14:22:40.924683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:10.574 [2024-11-27 14:22:40.924804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:10.574 [2024-11-27 14:22:40.924826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:23:10.574 [2024-11-27 14:22:40.924960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:10.574 [2024-11-27 14:22:40.925062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:10.574 [2024-11-27 14:22:40.925077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:10.574 [2024-11-27 14:22:40.925161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.574 pt2 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.574 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.575 14:22:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.575 "name": "raid_bdev1", 00:23:10.575 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:10.575 "strip_size_kb": 0, 00:23:10.575 "state": "online", 00:23:10.575 "raid_level": "raid1", 00:23:10.575 "superblock": true, 00:23:10.575 "num_base_bdevs": 2, 00:23:10.575 "num_base_bdevs_discovered": 1, 00:23:10.575 "num_base_bdevs_operational": 1, 00:23:10.575 "base_bdevs_list": [ 00:23:10.575 { 00:23:10.575 "name": null, 00:23:10.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.575 "is_configured": false, 00:23:10.575 "data_offset": 256, 00:23:10.575 "data_size": 7936 00:23:10.575 }, 00:23:10.575 { 00:23:10.575 "name": "pt2", 00:23:10.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.575 "is_configured": true, 00:23:10.575 "data_offset": 256, 00:23:10.575 "data_size": 7936 00:23:10.575 } 00:23:10.575 ] 00:23:10.575 }' 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.575 14:22:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.140 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:11.140 14:22:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.140 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.140 [2024-11-27 14:22:41.437700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.140 [2024-11-27 14:22:41.437925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.140 [2024-11-27 14:22:41.438040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.140 [2024-11-27 14:22:41.438116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.140 [2024-11-27 14:22:41.438133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.141 [2024-11-27 14:22:41.501712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:11.141 [2024-11-27 14:22:41.501949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.141 [2024-11-27 14:22:41.502027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:11.141 [2024-11-27 14:22:41.502271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.141 [2024-11-27 14:22:41.504996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.141 [2024-11-27 14:22:41.505042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:11.141 [2024-11-27 14:22:41.505118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:11.141 [2024-11-27 14:22:41.505179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:11.141 [2024-11-27 14:22:41.505344] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:11.141 [2024-11-27 14:22:41.505362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.141 [2024-11-27 14:22:41.505385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:11.141 [2024-11-27 14:22:41.505455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:11.141 [2024-11-27 14:22:41.505561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:23:11.141 [2024-11-27 14:22:41.505576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:11.141 [2024-11-27 14:22:41.505694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:11.141 [2024-11-27 14:22:41.505777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:11.141 [2024-11-27 14:22:41.505796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:11.141 [2024-11-27 14:22:41.505956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.141 pt1 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.141 14:22:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.141 "name": "raid_bdev1", 00:23:11.141 "uuid": "0efbac89-63b8-4ec2-9f8e-57e59e391fbd", 00:23:11.141 "strip_size_kb": 0, 00:23:11.141 "state": "online", 00:23:11.141 "raid_level": "raid1", 00:23:11.141 "superblock": true, 00:23:11.141 "num_base_bdevs": 2, 00:23:11.141 "num_base_bdevs_discovered": 1, 00:23:11.141 "num_base_bdevs_operational": 1, 00:23:11.141 "base_bdevs_list": [ 00:23:11.141 { 00:23:11.141 "name": null, 00:23:11.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.141 "is_configured": false, 00:23:11.141 "data_offset": 256, 00:23:11.141 "data_size": 7936 00:23:11.141 }, 00:23:11.141 { 00:23:11.141 "name": "pt2", 00:23:11.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.141 "is_configured": true, 00:23:11.141 "data_offset": 256, 00:23:11.141 "data_size": 7936 00:23:11.141 } 00:23:11.141 ] 00:23:11.141 }' 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.141 14:22:41 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.706 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:11.706 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:11.706 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.706 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.706 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.707 [2024-11-27 14:22:42.098460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 0efbac89-63b8-4ec2-9f8e-57e59e391fbd '!=' 0efbac89-63b8-4ec2-9f8e-57e59e391fbd ']' 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89416 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89416 ']' 00:23:11.707 14:22:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89416 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89416 00:23:11.707 killing process with pid 89416 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89416' 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89416 00:23:11.707 [2024-11-27 14:22:42.182832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:11.707 [2024-11-27 14:22:42.182945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.707 14:22:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89416 00:23:11.707 [2024-11-27 14:22:42.183013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.707 [2024-11-27 14:22:42.183046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:11.964 [2024-11-27 14:22:42.369519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:13.339 14:22:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:23:13.339 00:23:13.339 real 0m6.842s 00:23:13.339 user 0m10.882s 00:23:13.339 sys 0m1.004s 
00:23:13.339 14:22:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.339 14:22:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.339 ************************************ 00:23:13.339 END TEST raid_superblock_test_md_interleaved 00:23:13.339 ************************************ 00:23:13.339 14:22:43 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:13.339 14:22:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:13.339 14:22:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.339 14:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.339 ************************************ 00:23:13.339 START TEST raid_rebuild_test_sb_md_interleaved 00:23:13.339 ************************************ 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:13.339 14:22:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:13.339 
14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89750 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89750 00:23:13.339 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89750 ']' 00:23:13.340 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.340 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.340 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.340 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.340 14:22:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.340 [2024-11-27 14:22:43.600337] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:23:13.340 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:13.340 Zero copy mechanism will not be used. 
00:23:13.340 [2024-11-27 14:22:43.600856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89750 ] 00:23:13.340 [2024-11-27 14:22:43.788694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.656 [2024-11-27 14:22:43.920140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.656 [2024-11-27 14:22:44.131542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.656 [2024-11-27 14:22:44.131745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.223 BaseBdev1_malloc 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.223 14:22:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.223 [2024-11-27 14:22:44.703607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:14.223 [2024-11-27 14:22:44.703743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.223 [2024-11-27 14:22:44.703958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:14.223 [2024-11-27 14:22:44.704025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.223 [2024-11-27 14:22:44.706846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.223 [2024-11-27 14:22:44.707071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:14.223 BaseBdev1 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.223 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.482 BaseBdev2_malloc 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.482 [2024-11-27 14:22:44.757268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:14.482 [2024-11-27 14:22:44.757538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.482 [2024-11-27 14:22:44.757615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:14.482 [2024-11-27 14:22:44.757934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.482 [2024-11-27 14:22:44.760585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.482 [2024-11-27 14:22:44.760802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:14.482 BaseBdev2 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.482 spare_malloc 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.482 spare_delay 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.482 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.482 [2024-11-27 14:22:44.831062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:14.482 [2024-11-27 14:22:44.831185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.482 [2024-11-27 14:22:44.831340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:14.482 [2024-11-27 14:22:44.831374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.483 [2024-11-27 14:22:44.834120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.483 [2024-11-27 14:22:44.834207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:14.483 spare 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.483 [2024-11-27 14:22:44.839188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.483 [2024-11-27 14:22:44.842145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:14.483 [2024-11-27 
14:22:44.842550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:14.483 [2024-11-27 14:22:44.842725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:14.483 [2024-11-27 14:22:44.842897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:14.483 [2024-11-27 14:22:44.843122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:14.483 [2024-11-27 14:22:44.843238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:14.483 [2024-11-27 14:22:44.843540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.483 "name": "raid_bdev1", 00:23:14.483 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:14.483 "strip_size_kb": 0, 00:23:14.483 "state": "online", 00:23:14.483 "raid_level": "raid1", 00:23:14.483 "superblock": true, 00:23:14.483 "num_base_bdevs": 2, 00:23:14.483 "num_base_bdevs_discovered": 2, 00:23:14.483 "num_base_bdevs_operational": 2, 00:23:14.483 "base_bdevs_list": [ 00:23:14.483 { 00:23:14.483 "name": "BaseBdev1", 00:23:14.483 "uuid": "a16e8722-f1b9-555f-81b1-260e3108d185", 00:23:14.483 "is_configured": true, 00:23:14.483 "data_offset": 256, 00:23:14.483 "data_size": 7936 00:23:14.483 }, 00:23:14.483 { 00:23:14.483 "name": "BaseBdev2", 00:23:14.483 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:14.483 "is_configured": true, 00:23:14.483 "data_offset": 256, 00:23:14.483 "data_size": 7936 00:23:14.483 } 00:23:14.483 ] 00:23:14.483 }' 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.483 14:22:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.052 14:22:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.052 [2024-11-27 14:22:45.384187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:23:15.052 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:15.053 14:22:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.053 [2024-11-27 14:22:45.499740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.053 14:22:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.053 "name": "raid_bdev1", 00:23:15.053 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:15.053 "strip_size_kb": 0, 00:23:15.053 "state": "online", 00:23:15.053 "raid_level": "raid1", 00:23:15.053 "superblock": true, 00:23:15.053 "num_base_bdevs": 2, 00:23:15.053 "num_base_bdevs_discovered": 1, 00:23:15.053 "num_base_bdevs_operational": 1, 00:23:15.053 "base_bdevs_list": [ 00:23:15.053 { 00:23:15.053 "name": null, 00:23:15.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.053 "is_configured": false, 00:23:15.053 "data_offset": 0, 00:23:15.053 "data_size": 7936 00:23:15.053 }, 00:23:15.053 { 00:23:15.053 "name": "BaseBdev2", 00:23:15.053 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:15.053 "is_configured": true, 00:23:15.053 "data_offset": 256, 00:23:15.053 "data_size": 7936 00:23:15.053 } 00:23:15.053 ] 00:23:15.053 }' 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.053 14:22:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.618 14:22:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:15.618 14:22:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.618 14:22:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.618 [2024-11-27 14:22:46.012022] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.618 [2024-11-27 14:22:46.030273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:15.618 14:22:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.618 14:22:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:15.618 [2024-11-27 14:22:46.033071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.555 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:16.814 "name": "raid_bdev1", 00:23:16.814 
"uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:16.814 "strip_size_kb": 0, 00:23:16.814 "state": "online", 00:23:16.814 "raid_level": "raid1", 00:23:16.814 "superblock": true, 00:23:16.814 "num_base_bdevs": 2, 00:23:16.814 "num_base_bdevs_discovered": 2, 00:23:16.814 "num_base_bdevs_operational": 2, 00:23:16.814 "process": { 00:23:16.814 "type": "rebuild", 00:23:16.814 "target": "spare", 00:23:16.814 "progress": { 00:23:16.814 "blocks": 2560, 00:23:16.814 "percent": 32 00:23:16.814 } 00:23:16.814 }, 00:23:16.814 "base_bdevs_list": [ 00:23:16.814 { 00:23:16.814 "name": "spare", 00:23:16.814 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:16.814 "is_configured": true, 00:23:16.814 "data_offset": 256, 00:23:16.814 "data_size": 7936 00:23:16.814 }, 00:23:16.814 { 00:23:16.814 "name": "BaseBdev2", 00:23:16.814 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:16.814 "is_configured": true, 00:23:16.814 "data_offset": 256, 00:23:16.814 "data_size": 7936 00:23:16.814 } 00:23:16.814 ] 00:23:16.814 }' 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.814 [2024-11-27 14:22:47.202726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:16.814 [2024-11-27 14:22:47.242710] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:16.814 [2024-11-27 14:22:47.243025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.814 [2024-11-27 14:22:47.243054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:16.814 [2024-11-27 14:22:47.243074] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.814 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.073 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.073 "name": "raid_bdev1", 00:23:17.073 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:17.073 "strip_size_kb": 0, 00:23:17.073 "state": "online", 00:23:17.073 "raid_level": "raid1", 00:23:17.073 "superblock": true, 00:23:17.073 "num_base_bdevs": 2, 00:23:17.073 "num_base_bdevs_discovered": 1, 00:23:17.073 "num_base_bdevs_operational": 1, 00:23:17.073 "base_bdevs_list": [ 00:23:17.073 { 00:23:17.073 "name": null, 00:23:17.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.073 "is_configured": false, 00:23:17.073 "data_offset": 0, 00:23:17.073 "data_size": 7936 00:23:17.073 }, 00:23:17.073 { 00:23:17.073 "name": "BaseBdev2", 00:23:17.073 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:17.073 "is_configured": true, 00:23:17.073 "data_offset": 256, 00:23:17.073 "data_size": 7936 00:23:17.073 } 00:23:17.073 ] 00:23:17.073 }' 00:23:17.073 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.073 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:17.331 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.590 "name": "raid_bdev1", 00:23:17.590 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:17.590 "strip_size_kb": 0, 00:23:17.590 "state": "online", 00:23:17.590 "raid_level": "raid1", 00:23:17.590 "superblock": true, 00:23:17.590 "num_base_bdevs": 2, 00:23:17.590 "num_base_bdevs_discovered": 1, 00:23:17.590 "num_base_bdevs_operational": 1, 00:23:17.590 "base_bdevs_list": [ 00:23:17.590 { 00:23:17.590 "name": null, 00:23:17.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.590 "is_configured": false, 00:23:17.590 "data_offset": 0, 00:23:17.590 "data_size": 7936 00:23:17.590 }, 00:23:17.590 { 00:23:17.590 "name": "BaseBdev2", 00:23:17.590 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:17.590 "is_configured": true, 00:23:17.590 "data_offset": 256, 00:23:17.590 "data_size": 7936 00:23:17.590 } 00:23:17.590 ] 00:23:17.590 }' 
00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:17.590 [2024-11-27 14:22:47.961423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.590 [2024-11-27 14:22:47.977658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.590 14:22:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:17.590 [2024-11-27 14:22:47.980439] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.526 14:22:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:18.526 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.790 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.790 "name": "raid_bdev1", 00:23:18.790 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:18.790 "strip_size_kb": 0, 00:23:18.790 "state": "online", 00:23:18.790 "raid_level": "raid1", 00:23:18.790 "superblock": true, 00:23:18.790 "num_base_bdevs": 2, 00:23:18.790 "num_base_bdevs_discovered": 2, 00:23:18.790 "num_base_bdevs_operational": 2, 00:23:18.790 "process": { 00:23:18.790 "type": "rebuild", 00:23:18.790 "target": "spare", 00:23:18.790 "progress": { 00:23:18.790 "blocks": 2560, 00:23:18.790 "percent": 32 00:23:18.790 } 00:23:18.790 }, 00:23:18.790 "base_bdevs_list": [ 00:23:18.790 { 00:23:18.790 "name": "spare", 00:23:18.790 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:18.790 "is_configured": true, 00:23:18.790 "data_offset": 256, 00:23:18.790 "data_size": 7936 00:23:18.790 }, 00:23:18.790 { 00:23:18.791 "name": "BaseBdev2", 00:23:18.791 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:18.791 "is_configured": true, 00:23:18.791 "data_offset": 256, 00:23:18.791 "data_size": 7936 00:23:18.791 } 00:23:18.791 ] 00:23:18.791 }' 00:23:18.791 14:22:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:18.791 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=811 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.791 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:18.792 14:22:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.792 "name": "raid_bdev1", 00:23:18.792 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:18.792 "strip_size_kb": 0, 00:23:18.792 "state": "online", 00:23:18.792 "raid_level": "raid1", 00:23:18.792 "superblock": true, 00:23:18.792 "num_base_bdevs": 2, 00:23:18.792 "num_base_bdevs_discovered": 2, 00:23:18.792 "num_base_bdevs_operational": 2, 00:23:18.792 "process": { 00:23:18.792 "type": "rebuild", 00:23:18.792 "target": "spare", 00:23:18.792 "progress": { 00:23:18.792 "blocks": 2816, 00:23:18.792 "percent": 35 00:23:18.792 } 00:23:18.792 }, 00:23:18.792 "base_bdevs_list": [ 00:23:18.792 { 00:23:18.792 "name": "spare", 00:23:18.792 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:18.792 "is_configured": true, 00:23:18.792 "data_offset": 256, 00:23:18.792 "data_size": 7936 00:23:18.792 }, 00:23:18.792 { 00:23:18.792 "name": "BaseBdev2", 00:23:18.792 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:18.792 "is_configured": true, 00:23:18.792 "data_offset": 256, 00:23:18.792 "data_size": 7936 00:23:18.792 } 00:23:18.792 ] 00:23:18.792 }' 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.792 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:19.051 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.051 14:22:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.986 14:22:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:19.986 "name": "raid_bdev1", 00:23:19.986 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:19.986 "strip_size_kb": 0, 00:23:19.986 "state": "online", 00:23:19.986 "raid_level": "raid1", 00:23:19.986 "superblock": true, 00:23:19.986 "num_base_bdevs": 2, 00:23:19.986 "num_base_bdevs_discovered": 2, 00:23:19.986 "num_base_bdevs_operational": 2, 00:23:19.986 "process": { 00:23:19.986 "type": "rebuild", 00:23:19.986 "target": "spare", 00:23:19.986 "progress": { 00:23:19.986 "blocks": 5888, 00:23:19.986 "percent": 74 00:23:19.986 } 00:23:19.986 }, 00:23:19.986 "base_bdevs_list": [ 00:23:19.986 { 00:23:19.986 "name": "spare", 00:23:19.986 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:19.986 "is_configured": true, 00:23:19.986 "data_offset": 256, 00:23:19.986 "data_size": 7936 00:23:19.986 }, 00:23:19.986 { 00:23:19.986 "name": "BaseBdev2", 00:23:19.986 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:19.986 "is_configured": true, 00:23:19.986 "data_offset": 256, 00:23:19.986 "data_size": 7936 00:23:19.986 } 00:23:19.986 ] 00:23:19.986 }' 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.986 14:22:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:20.923 [2024-11-27 14:22:51.104810] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:20.923 [2024-11-27 14:22:51.104973] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:20.923 [2024-11-27 14:22:51.105142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.182 "name": "raid_bdev1", 00:23:21.182 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:21.182 "strip_size_kb": 0, 00:23:21.182 "state": "online", 00:23:21.182 "raid_level": "raid1", 00:23:21.182 "superblock": true, 00:23:21.182 "num_base_bdevs": 2, 00:23:21.182 
"num_base_bdevs_discovered": 2, 00:23:21.182 "num_base_bdevs_operational": 2, 00:23:21.182 "base_bdevs_list": [ 00:23:21.182 { 00:23:21.182 "name": "spare", 00:23:21.182 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:21.182 "is_configured": true, 00:23:21.182 "data_offset": 256, 00:23:21.182 "data_size": 7936 00:23:21.182 }, 00:23:21.182 { 00:23:21.182 "name": "BaseBdev2", 00:23:21.182 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:21.182 "is_configured": true, 00:23:21.182 "data_offset": 256, 00:23:21.182 "data_size": 7936 00:23:21.182 } 00:23:21.182 ] 00:23:21.182 }' 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.182 
14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:21.182 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.441 "name": "raid_bdev1", 00:23:21.441 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:21.441 "strip_size_kb": 0, 00:23:21.441 "state": "online", 00:23:21.441 "raid_level": "raid1", 00:23:21.441 "superblock": true, 00:23:21.441 "num_base_bdevs": 2, 00:23:21.441 "num_base_bdevs_discovered": 2, 00:23:21.441 "num_base_bdevs_operational": 2, 00:23:21.441 "base_bdevs_list": [ 00:23:21.441 { 00:23:21.441 "name": "spare", 00:23:21.441 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:21.441 "is_configured": true, 00:23:21.441 "data_offset": 256, 00:23:21.441 "data_size": 7936 00:23:21.441 }, 00:23:21.441 { 00:23:21.441 "name": "BaseBdev2", 00:23:21.441 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:21.441 "is_configured": true, 00:23:21.441 "data_offset": 256, 00:23:21.441 "data_size": 7936 00:23:21.441 } 00:23:21.441 ] 00:23:21.441 }' 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:21.441 14:22:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.441 "name": 
"raid_bdev1", 00:23:21.441 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:21.441 "strip_size_kb": 0, 00:23:21.441 "state": "online", 00:23:21.441 "raid_level": "raid1", 00:23:21.441 "superblock": true, 00:23:21.441 "num_base_bdevs": 2, 00:23:21.441 "num_base_bdevs_discovered": 2, 00:23:21.441 "num_base_bdevs_operational": 2, 00:23:21.441 "base_bdevs_list": [ 00:23:21.441 { 00:23:21.441 "name": "spare", 00:23:21.441 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:21.441 "is_configured": true, 00:23:21.441 "data_offset": 256, 00:23:21.441 "data_size": 7936 00:23:21.441 }, 00:23:21.441 { 00:23:21.441 "name": "BaseBdev2", 00:23:21.441 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:21.441 "is_configured": true, 00:23:21.441 "data_offset": 256, 00:23:21.441 "data_size": 7936 00:23:21.441 } 00:23:21.441 ] 00:23:21.441 }' 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.441 14:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.010 [2024-11-27 14:22:52.321558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.010 [2024-11-27 14:22:52.321604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.010 [2024-11-27 14:22:52.321724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.010 [2024-11-27 14:22:52.321845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.010 [2024-11-27 
14:22:52.321865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:22.010 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.011 14:22:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.011 [2024-11-27 14:22:52.397546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:22.011 [2024-11-27 14:22:52.397862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.011 [2024-11-27 14:22:52.397919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:22.011 [2024-11-27 14:22:52.397936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.011 [2024-11-27 14:22:52.400725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.011 spare 00:23:22.011 [2024-11-27 14:22:52.400910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:22.011 [2024-11-27 14:22:52.401011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:22.011 [2024-11-27 14:22:52.401081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:22.011 [2024-11-27 14:22:52.401241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.011 [2024-11-27 14:22:52.501367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:22.011 [2024-11-27 14:22:52.501414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:22.011 [2024-11-27 14:22:52.501564] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:22.011 [2024-11-27 14:22:52.501737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:22.011 [2024-11-27 14:22:52.501755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:22.011 [2024-11-27 14:22:52.501890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.011 14:22:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.011 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.270 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.270 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.270 "name": "raid_bdev1", 00:23:22.270 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:22.270 "strip_size_kb": 0, 00:23:22.270 "state": "online", 00:23:22.270 "raid_level": "raid1", 00:23:22.270 "superblock": true, 00:23:22.270 "num_base_bdevs": 2, 00:23:22.270 "num_base_bdevs_discovered": 2, 00:23:22.270 "num_base_bdevs_operational": 2, 00:23:22.270 "base_bdevs_list": [ 00:23:22.270 { 00:23:22.270 "name": "spare", 00:23:22.270 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:22.270 "is_configured": true, 00:23:22.270 "data_offset": 256, 00:23:22.270 "data_size": 7936 00:23:22.270 }, 00:23:22.270 { 00:23:22.270 "name": "BaseBdev2", 00:23:22.270 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:22.270 "is_configured": true, 00:23:22.270 "data_offset": 256, 00:23:22.270 "data_size": 7936 00:23:22.270 } 00:23:22.270 ] 00:23:22.270 }' 00:23:22.270 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.270 14:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.530 14:22:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.530 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.789 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.789 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.789 "name": "raid_bdev1", 00:23:22.789 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:22.789 "strip_size_kb": 0, 00:23:22.789 "state": "online", 00:23:22.789 "raid_level": "raid1", 00:23:22.789 "superblock": true, 00:23:22.789 "num_base_bdevs": 2, 00:23:22.789 "num_base_bdevs_discovered": 2, 00:23:22.789 "num_base_bdevs_operational": 2, 00:23:22.789 "base_bdevs_list": [ 00:23:22.789 { 00:23:22.789 "name": "spare", 00:23:22.789 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:22.789 "is_configured": true, 00:23:22.789 "data_offset": 256, 00:23:22.789 "data_size": 7936 00:23:22.789 }, 00:23:22.789 { 00:23:22.789 "name": "BaseBdev2", 00:23:22.789 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:22.789 "is_configured": true, 00:23:22.789 "data_offset": 256, 00:23:22.789 "data_size": 7936 00:23:22.789 } 00:23:22.789 ] 00:23:22.789 }' 00:23:22.789 14:22:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.790 [2024-11-27 14:22:53.270311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.790 14:22:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.790 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.049 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.049 "name": "raid_bdev1", 00:23:23.049 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:23.049 "strip_size_kb": 0, 00:23:23.049 "state": "online", 00:23:23.049 
"raid_level": "raid1", 00:23:23.049 "superblock": true, 00:23:23.049 "num_base_bdevs": 2, 00:23:23.049 "num_base_bdevs_discovered": 1, 00:23:23.049 "num_base_bdevs_operational": 1, 00:23:23.049 "base_bdevs_list": [ 00:23:23.049 { 00:23:23.049 "name": null, 00:23:23.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.049 "is_configured": false, 00:23:23.049 "data_offset": 0, 00:23:23.049 "data_size": 7936 00:23:23.049 }, 00:23:23.049 { 00:23:23.049 "name": "BaseBdev2", 00:23:23.049 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:23.049 "is_configured": true, 00:23:23.049 "data_offset": 256, 00:23:23.049 "data_size": 7936 00:23:23.049 } 00:23:23.049 ] 00:23:23.049 }' 00:23:23.049 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.049 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:23.307 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:23.308 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.308 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:23.308 [2024-11-27 14:22:53.806440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:23.308 [2024-11-27 14:22:53.806771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:23.308 [2024-11-27 14:22:53.806803] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:23.308 [2024-11-27 14:22:53.806930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:23.566 [2024-11-27 14:22:53.823922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:23.566 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.566 14:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:23.566 [2024-11-27 14:22:53.826600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.511 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:24.511 "name": "raid_bdev1", 00:23:24.511 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:24.511 "strip_size_kb": 0, 00:23:24.511 "state": "online", 00:23:24.511 "raid_level": "raid1", 00:23:24.511 "superblock": true, 00:23:24.511 "num_base_bdevs": 2, 00:23:24.511 "num_base_bdevs_discovered": 2, 00:23:24.511 "num_base_bdevs_operational": 2, 00:23:24.511 "process": { 00:23:24.511 "type": "rebuild", 00:23:24.511 "target": "spare", 00:23:24.511 "progress": { 00:23:24.511 "blocks": 2560, 00:23:24.511 "percent": 32 00:23:24.511 } 00:23:24.511 }, 00:23:24.511 "base_bdevs_list": [ 00:23:24.511 { 00:23:24.511 "name": "spare", 00:23:24.511 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:24.511 "is_configured": true, 00:23:24.511 "data_offset": 256, 00:23:24.511 "data_size": 7936 00:23:24.511 }, 00:23:24.511 { 00:23:24.512 "name": "BaseBdev2", 00:23:24.512 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:24.512 "is_configured": true, 00:23:24.512 "data_offset": 256, 00:23:24.512 "data_size": 7936 00:23:24.512 } 00:23:24.512 ] 00:23:24.512 }' 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.512 14:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:24.512 [2024-11-27 14:22:54.964296] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:24.771 [2024-11-27 14:22:55.036284] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:24.771 [2024-11-27 14:22:55.036389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.771 [2024-11-27 14:22:55.036420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:24.771 [2024-11-27 14:22:55.036435] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.771 14:22:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.771 "name": "raid_bdev1", 00:23:24.771 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:24.771 "strip_size_kb": 0, 00:23:24.771 "state": "online", 00:23:24.771 "raid_level": "raid1", 00:23:24.771 "superblock": true, 00:23:24.771 "num_base_bdevs": 2, 00:23:24.771 "num_base_bdevs_discovered": 1, 00:23:24.771 "num_base_bdevs_operational": 1, 00:23:24.771 "base_bdevs_list": [ 00:23:24.771 { 00:23:24.771 "name": null, 00:23:24.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.771 "is_configured": false, 00:23:24.771 "data_offset": 0, 00:23:24.771 "data_size": 7936 00:23:24.771 }, 00:23:24.771 { 00:23:24.771 "name": "BaseBdev2", 00:23:24.771 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:24.771 "is_configured": true, 00:23:24.771 "data_offset": 256, 00:23:24.771 "data_size": 7936 00:23:24.771 } 00:23:24.771 ] 00:23:24.771 }' 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.771 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.339 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:25.339 14:22:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.339 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.339 [2024-11-27 14:22:55.553420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:25.339 [2024-11-27 14:22:55.553661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.339 [2024-11-27 14:22:55.553745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:25.339 [2024-11-27 14:22:55.554002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.339 [2024-11-27 14:22:55.554311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.339 [2024-11-27 14:22:55.554343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:25.339 [2024-11-27 14:22:55.554430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:25.339 [2024-11-27 14:22:55.554455] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:25.339 [2024-11-27 14:22:55.554470] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:25.339 [2024-11-27 14:22:55.554502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:25.339 [2024-11-27 14:22:55.571744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:25.339 spare 00:23:25.339 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.339 14:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:25.339 [2024-11-27 14:22:55.574545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:26.276 "name": "raid_bdev1", 00:23:26.276 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:26.276 "strip_size_kb": 0, 00:23:26.276 "state": "online", 00:23:26.276 "raid_level": "raid1", 00:23:26.276 "superblock": true, 00:23:26.276 "num_base_bdevs": 2, 00:23:26.276 "num_base_bdevs_discovered": 2, 00:23:26.276 "num_base_bdevs_operational": 2, 00:23:26.276 "process": { 00:23:26.276 "type": "rebuild", 00:23:26.276 "target": "spare", 00:23:26.276 "progress": { 00:23:26.276 "blocks": 2560, 00:23:26.276 "percent": 32 00:23:26.276 } 00:23:26.276 }, 00:23:26.276 "base_bdevs_list": [ 00:23:26.276 { 00:23:26.276 "name": "spare", 00:23:26.276 "uuid": "7667c874-2c8e-5ae8-b722-add70697af18", 00:23:26.276 "is_configured": true, 00:23:26.276 "data_offset": 256, 00:23:26.276 "data_size": 7936 00:23:26.276 }, 00:23:26.276 { 00:23:26.276 "name": "BaseBdev2", 00:23:26.276 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:26.276 "is_configured": true, 00:23:26.276 "data_offset": 256, 00:23:26.276 "data_size": 7936 00:23:26.276 } 00:23:26.276 ] 00:23:26.276 }' 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.276 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.276 [2024-11-27 
14:22:56.744483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:26.276 [2024-11-27 14:22:56.783978] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:26.276 [2024-11-27 14:22:56.784223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.276 [2024-11-27 14:22:56.784257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:26.276 [2024-11-27 14:22:56.784286] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.535 14:22:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.535 "name": "raid_bdev1", 00:23:26.535 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:26.535 "strip_size_kb": 0, 00:23:26.535 "state": "online", 00:23:26.535 "raid_level": "raid1", 00:23:26.535 "superblock": true, 00:23:26.535 "num_base_bdevs": 2, 00:23:26.535 "num_base_bdevs_discovered": 1, 00:23:26.535 "num_base_bdevs_operational": 1, 00:23:26.535 "base_bdevs_list": [ 00:23:26.535 { 00:23:26.535 "name": null, 00:23:26.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.535 "is_configured": false, 00:23:26.535 "data_offset": 0, 00:23:26.535 "data_size": 7936 00:23:26.535 }, 00:23:26.535 { 00:23:26.535 "name": "BaseBdev2", 00:23:26.535 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:26.535 "is_configured": true, 00:23:26.535 "data_offset": 256, 00:23:26.535 "data_size": 7936 00:23:26.535 } 00:23:26.535 ] 00:23:26.535 }' 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.535 14:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:27.104 14:22:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.104 "name": "raid_bdev1", 00:23:27.104 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:27.104 "strip_size_kb": 0, 00:23:27.104 "state": "online", 00:23:27.104 "raid_level": "raid1", 00:23:27.104 "superblock": true, 00:23:27.104 "num_base_bdevs": 2, 00:23:27.104 "num_base_bdevs_discovered": 1, 00:23:27.104 "num_base_bdevs_operational": 1, 00:23:27.104 "base_bdevs_list": [ 00:23:27.104 { 00:23:27.104 "name": null, 00:23:27.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.104 "is_configured": false, 00:23:27.104 "data_offset": 0, 00:23:27.104 "data_size": 7936 00:23:27.104 }, 00:23:27.104 { 00:23:27.104 "name": "BaseBdev2", 00:23:27.104 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:27.104 "is_configured": true, 00:23:27.104 "data_offset": 256, 
00:23:27.104 "data_size": 7936 00:23:27.104 } 00:23:27.104 ] 00:23:27.104 }' 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 [2024-11-27 14:22:57.488817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:27.104 [2024-11-27 14:22:57.489083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.104 [2024-11-27 14:22:57.489162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:27.104 [2024-11-27 14:22:57.489366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.104 [2024-11-27 14:22:57.489607] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.104 [2024-11-27 14:22:57.489646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:27.104 [2024-11-27 14:22:57.489713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:27.104 [2024-11-27 14:22:57.489733] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:27.104 [2024-11-27 14:22:57.489746] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:27.104 [2024-11-27 14:22:57.489758] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:27.104 BaseBdev1 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.104 14:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.041 14:22:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.041 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.300 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.300 "name": "raid_bdev1", 00:23:28.300 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:28.300 "strip_size_kb": 0, 00:23:28.300 "state": "online", 00:23:28.300 "raid_level": "raid1", 00:23:28.300 "superblock": true, 00:23:28.300 "num_base_bdevs": 2, 00:23:28.300 "num_base_bdevs_discovered": 1, 00:23:28.300 "num_base_bdevs_operational": 1, 00:23:28.300 "base_bdevs_list": [ 00:23:28.300 { 00:23:28.300 "name": null, 00:23:28.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.300 "is_configured": false, 00:23:28.300 "data_offset": 0, 00:23:28.300 "data_size": 7936 00:23:28.300 }, 00:23:28.300 { 00:23:28.300 "name": "BaseBdev2", 00:23:28.300 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:28.300 "is_configured": true, 00:23:28.300 "data_offset": 256, 00:23:28.300 "data_size": 7936 00:23:28.300 } 00:23:28.300 ] 00:23:28.300 }' 00:23:28.300 14:22:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.300 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.559 14:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.559 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.559 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:28.559 "name": "raid_bdev1", 00:23:28.559 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:28.559 "strip_size_kb": 0, 00:23:28.559 "state": "online", 00:23:28.559 "raid_level": "raid1", 00:23:28.559 "superblock": true, 00:23:28.559 "num_base_bdevs": 2, 00:23:28.559 "num_base_bdevs_discovered": 1, 00:23:28.559 "num_base_bdevs_operational": 1, 00:23:28.559 "base_bdevs_list": [ 00:23:28.559 { 00:23:28.559 "name": 
null, 00:23:28.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.559 "is_configured": false, 00:23:28.559 "data_offset": 0, 00:23:28.559 "data_size": 7936 00:23:28.559 }, 00:23:28.559 { 00:23:28.559 "name": "BaseBdev2", 00:23:28.559 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:28.559 "is_configured": true, 00:23:28.559 "data_offset": 256, 00:23:28.559 "data_size": 7936 00:23:28.559 } 00:23:28.559 ] 00:23:28.559 }' 00:23:28.559 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:28.818 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:28.818 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:28.818 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:28.818 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.819 [2024-11-27 14:22:59.157476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.819 [2024-11-27 14:22:59.157927] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:28.819 [2024-11-27 14:22:59.157966] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:28.819 request: 00:23:28.819 { 00:23:28.819 "base_bdev": "BaseBdev1", 00:23:28.819 "raid_bdev": "raid_bdev1", 00:23:28.819 "method": "bdev_raid_add_base_bdev", 00:23:28.819 "req_id": 1 00:23:28.819 } 00:23:28.819 Got JSON-RPC error response 00:23:28.819 response: 00:23:28.819 { 00:23:28.819 "code": -22, 00:23:28.819 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:28.819 } 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.819 14:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.754 "name": "raid_bdev1", 00:23:29.754 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:29.754 "strip_size_kb": 0, 
00:23:29.754 "state": "online", 00:23:29.754 "raid_level": "raid1", 00:23:29.754 "superblock": true, 00:23:29.754 "num_base_bdevs": 2, 00:23:29.754 "num_base_bdevs_discovered": 1, 00:23:29.754 "num_base_bdevs_operational": 1, 00:23:29.754 "base_bdevs_list": [ 00:23:29.754 { 00:23:29.754 "name": null, 00:23:29.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.754 "is_configured": false, 00:23:29.754 "data_offset": 0, 00:23:29.754 "data_size": 7936 00:23:29.754 }, 00:23:29.754 { 00:23:29.754 "name": "BaseBdev2", 00:23:29.754 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:29.754 "is_configured": true, 00:23:29.754 "data_offset": 256, 00:23:29.754 "data_size": 7936 00:23:29.754 } 00:23:29.754 ] 00:23:29.754 }' 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.754 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.322 
14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.322 "name": "raid_bdev1", 00:23:30.322 "uuid": "65316373-7d69-4992-b101-5c10a4bdc87c", 00:23:30.322 "strip_size_kb": 0, 00:23:30.322 "state": "online", 00:23:30.322 "raid_level": "raid1", 00:23:30.322 "superblock": true, 00:23:30.322 "num_base_bdevs": 2, 00:23:30.322 "num_base_bdevs_discovered": 1, 00:23:30.322 "num_base_bdevs_operational": 1, 00:23:30.322 "base_bdevs_list": [ 00:23:30.322 { 00:23:30.322 "name": null, 00:23:30.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.322 "is_configured": false, 00:23:30.322 "data_offset": 0, 00:23:30.322 "data_size": 7936 00:23:30.322 }, 00:23:30.322 { 00:23:30.322 "name": "BaseBdev2", 00:23:30.322 "uuid": "c9aae763-57be-5b39-96f3-b86273aba5cc", 00:23:30.322 "is_configured": true, 00:23:30.322 "data_offset": 256, 00:23:30.322 "data_size": 7936 00:23:30.322 } 00:23:30.322 ] 00:23:30.322 }' 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89750 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89750 ']' 00:23:30.322 14:23:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89750 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.322 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89750 00:23:30.581 killing process with pid 89750 00:23:30.581 Received shutdown signal, test time was about 60.000000 seconds 00:23:30.581 00:23:30.581 Latency(us) 00:23:30.581 [2024-11-27T14:23:01.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.581 [2024-11-27T14:23:01.094Z] =================================================================================================================== 00:23:30.581 [2024-11-27T14:23:01.094Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.581 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.581 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.581 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89750' 00:23:30.581 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89750 00:23:30.581 14:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89750 00:23:30.581 [2024-11-27 14:23:00.852602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:30.581 [2024-11-27 14:23:00.852939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.581 [2024-11-27 14:23:00.853040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:23:30.581 [2024-11-27 14:23:00.853066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:30.840 [2024-11-27 14:23:01.152400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:32.219 ************************************ 00:23:32.219 END TEST raid_rebuild_test_sb_md_interleaved 00:23:32.219 ************************************ 00:23:32.219 14:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:23:32.219 00:23:32.219 real 0m18.815s 00:23:32.219 user 0m25.568s 00:23:32.219 sys 0m1.526s 00:23:32.219 14:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.219 14:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.219 14:23:02 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:23:32.219 14:23:02 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:23:32.219 14:23:02 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89750 ']' 00:23:32.219 14:23:02 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89750 00:23:32.219 14:23:02 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:23:32.219 00:23:32.219 real 13m13.910s 00:23:32.219 user 18m37.210s 00:23:32.219 sys 1m48.770s 00:23:32.219 ************************************ 00:23:32.219 END TEST bdev_raid 00:23:32.219 ************************************ 00:23:32.219 14:23:02 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.219 14:23:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.219 14:23:02 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:32.219 14:23:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.219 14:23:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.219 14:23:02 -- common/autotest_common.sh@10 -- # set +x 00:23:32.219 
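The verify_raid_bdev_state / verify_raid_bdev_process checks traced above boil down to filtering the `bdev_raid_get_bdevs all` JSON by name with jq and comparing individual fields. A minimal standalone sketch of that pattern follows; the inlined JSON is an assumption standing in for the live `rpc_cmd` output, and `jq` is assumed to be installed (as it is on the test VMs):

```shell
# Sketch only: the inlined JSON stands in for `rpc_cmd bdev_raid_get_bdevs all`.
all_bdevs='[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
             "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1}]'

# Select the bdev under test by name, as bdev_raid.sh@113 does.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$all_bdevs")

# Compare individual fields against the expected values.
state=$(jq -r '.state' <<< "$raid_bdev_info")
level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
[[ $state == "online" && $level == "raid1" ]] && echo "raid_bdev1 verified"
```

The process checks at bdev_raid.sh@176-177 use the same idea with jq's alternative operator (`.process.type // "none"`), so a bdev with no rebuild process in flight still yields a comparable string.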
************************************ 00:23:32.219 START TEST spdkcli_raid 00:23:32.219 ************************************ 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:32.219 * Looking for test storage... 00:23:32.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.219 14:23:02 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.219 --rc genhtml_branch_coverage=1 00:23:32.219 --rc genhtml_function_coverage=1 00:23:32.219 --rc genhtml_legend=1 00:23:32.219 --rc geninfo_all_blocks=1 00:23:32.219 --rc geninfo_unexecuted_blocks=1 00:23:32.219 00:23:32.219 ' 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.219 --rc genhtml_branch_coverage=1 00:23:32.219 --rc genhtml_function_coverage=1 00:23:32.219 --rc genhtml_legend=1 00:23:32.219 --rc geninfo_all_blocks=1 00:23:32.219 --rc geninfo_unexecuted_blocks=1 00:23:32.219 00:23:32.219 ' 00:23:32.219 
14:23:02 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.219 --rc genhtml_branch_coverage=1 00:23:32.219 --rc genhtml_function_coverage=1 00:23:32.219 --rc genhtml_legend=1 00:23:32.219 --rc geninfo_all_blocks=1 00:23:32.219 --rc geninfo_unexecuted_blocks=1 00:23:32.219 00:23:32.219 ' 00:23:32.219 14:23:02 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.219 --rc genhtml_branch_coverage=1 00:23:32.219 --rc genhtml_function_coverage=1 00:23:32.219 --rc genhtml_legend=1 00:23:32.219 --rc geninfo_all_blocks=1 00:23:32.219 --rc geninfo_unexecuted_blocks=1 00:23:32.219 00:23:32.219 ' 00:23:32.219 14:23:02 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:32.219 14:23:02 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:32.219 14:23:02 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:32.219 14:23:02 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
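The lcov version probe traced above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) splits each dotted version on `.` and `-` into an array and compares it component by component, padding missing components with zero. A self-contained sketch of that compare — a simplified reimplementation for illustration, not the exact scripts/common.sh code:

```shell
# Simplified sketch of the component-wise version compare seen in the trace.
version_lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # Missing components compare as 0, so "1.15" vs "2" becomes "1.15" vs "2.0".
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting into arrays avoids the classic string-compare trap where `1.9` would sort after `1.15`.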
00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:32.219 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:32.220 14:23:02 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90431 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90431 00:23:32.220 14:23:02 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90431 ']' 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.220 14:23:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.479 [2024-11-27 14:23:02.777976] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:23:32.479 [2024-11-27 14:23:02.778205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90431 ] 00:23:32.479 [2024-11-27 14:23:02.965177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.821 [2024-11-27 14:23:03.112449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.821 [2024-11-27 14:23:03.112471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.770 14:23:04 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.770 14:23:04 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:23:33.770 14:23:04 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:23:33.770 14:23:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.770 14:23:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:33.770 14:23:04 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:23:33.770 14:23:04 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.770 14:23:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:33.770 14:23:04 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:33.770 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:33.770 ' 00:23:35.675 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:23:35.675 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:23:35.675 14:23:05 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:23:35.675 14:23:05 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.675 14:23:05 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.675 14:23:05 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:23:35.675 14:23:05 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.675 14:23:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:35.675 14:23:05 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:23:35.675 ' 00:23:36.611 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:23:36.611 14:23:07 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:23:36.611 14:23:07 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.611 14:23:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:36.611 14:23:07 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:23:36.611 14:23:07 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.611 14:23:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:36.611 14:23:07 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:23:36.611 14:23:07 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:23:37.179 14:23:07 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:23:37.179 14:23:07 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:23:37.179 14:23:07 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:23:37.179 14:23:07 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.179 14:23:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:37.437 14:23:07 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:23:37.437 14:23:07 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.437 14:23:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:37.437 14:23:07 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:23:37.437 ' 00:23:38.373 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:23:38.373 14:23:08 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:23:38.373 14:23:08 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.373 14:23:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:38.373 14:23:08 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:23:38.373 14:23:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.373 14:23:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:38.373 14:23:08 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:23:38.373 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:23:38.373 ' 00:23:39.775 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:23:39.775 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:23:40.034 14:23:10 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:40.034 14:23:10 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90431 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90431 ']' 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90431 00:23:40.034 14:23:10 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90431 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.034 killing process with pid 90431 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90431' 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90431 00:23:40.034 14:23:10 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90431 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90431 ']' 00:23:42.566 Process with pid 90431 is not found 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90431 00:23:42.566 14:23:12 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90431 ']' 00:23:42.566 14:23:12 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90431 00:23:42.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90431) - No such process 00:23:42.566 14:23:12 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90431 is not found' 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:42.566 14:23:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:42.566 ************************************ 00:23:42.566 END TEST spdkcli_raid 
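The killprocess sequence traced above follows a fixed shape: probe the pid with `kill -0`, re-check the command name with `ps` before killing (guarding against pid reuse), then `kill` and `wait`. A simplified standalone sketch of that pattern — an illustrative reimplementation, not the original autotest_common.sh function:

```shell
# Simplified sketch of the killprocess pattern from autotest_common.sh.
killprocess_sketch() {
  local pid=$1
  [[ -n $pid ]] || return 1
  # kill -0 sends no signal; it only tests whether the pid exists.
  kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
  local name
  name=$(ps --no-headers -o comm= -p "$pid")  # confirm it is still our process
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true  # reap if it is our child; ignore its exit status
}

sleep 30 &
killprocess_sketch $!
```

The final `wait` is what keeps the log free of "No such process" noise later: once reaped, a subsequent `ps -p` on the same pid (as in the cleanup at bdev_raid.sh@56) cleanly reports nothing.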
00:23:42.566 ************************************ 00:23:42.566 00:23:42.566 real 0m10.239s 00:23:42.566 user 0m21.086s 00:23:42.566 sys 0m1.183s 00:23:42.566 14:23:12 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.566 14:23:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:42.566 14:23:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:42.566 14:23:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.566 14:23:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.566 14:23:12 -- common/autotest_common.sh@10 -- # set +x 00:23:42.566 ************************************ 00:23:42.566 START TEST blockdev_raid5f 00:23:42.566 ************************************ 00:23:42.566 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:42.567 * Looking for test storage... 00:23:42.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.567 14:23:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.567 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.567 --rc genhtml_branch_coverage=1 00:23:42.567 --rc genhtml_function_coverage=1 00:23:42.567 --rc genhtml_legend=1 00:23:42.567 --rc geninfo_all_blocks=1 00:23:42.567 --rc geninfo_unexecuted_blocks=1 00:23:42.567 00:23:42.567 ' 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.567 --rc genhtml_branch_coverage=1 00:23:42.567 --rc genhtml_function_coverage=1 00:23:42.567 --rc genhtml_legend=1 00:23:42.567 --rc geninfo_all_blocks=1 00:23:42.567 --rc geninfo_unexecuted_blocks=1 00:23:42.567 00:23:42.567 ' 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.567 --rc genhtml_branch_coverage=1 00:23:42.567 --rc genhtml_function_coverage=1 00:23:42.567 --rc genhtml_legend=1 00:23:42.567 --rc geninfo_all_blocks=1 00:23:42.567 --rc geninfo_unexecuted_blocks=1 00:23:42.567 00:23:42.567 ' 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.567 --rc genhtml_branch_coverage=1 00:23:42.567 --rc genhtml_function_coverage=1 00:23:42.567 --rc genhtml_legend=1 00:23:42.567 --rc geninfo_all_blocks=1 00:23:42.567 --rc geninfo_unexecuted_blocks=1 00:23:42.567 00:23:42.567 ' 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90714 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:42.567 14:23:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90714 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90714 ']' 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.567 14:23:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:42.567 [2024-11-27 14:23:13.040607] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:23:42.567 [2024-11-27 14:23:13.041038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90714 ] 00:23:42.826 [2024-11-27 14:23:13.228001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.084 [2024-11-27 14:23:13.362989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 Malloc0 00:23:44.018 Malloc1 00:23:44.018 Malloc2 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 14:23:14 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.018 14:23:14 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "879a7759-d4db-4192-badd-55016291993f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "879a7759-d4db-4192-badd-55016291993f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "879a7759-d4db-4192-badd-55016291993f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "71d372d2-e8d6-4099-9190-17aed38753af",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "42011ea3-3b57-4ff4-83b8-02b8ef4a8fec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "cde5bf26-fdee-49f2-a7a5-d2bacbfd9457",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:44.018 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:23:44.276 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:23:44.276 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:23:44.276 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:23:44.276 14:23:14 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90714 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90714 ']' 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90714 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.276 
14:23:14 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90714 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.276 killing process with pid 90714 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90714' 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90714 00:23:44.276 14:23:14 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90714 00:23:46.827 14:23:17 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:46.827 14:23:17 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:46.827 14:23:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:46.827 14:23:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.827 14:23:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:46.827 ************************************ 00:23:46.827 START TEST bdev_hello_world 00:23:46.827 ************************************ 00:23:46.827 14:23:17 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:46.827 [2024-11-27 14:23:17.157081] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:23:46.827 [2024-11-27 14:23:17.157254] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90776 ] 00:23:47.088 [2024-11-27 14:23:17.344739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.088 [2024-11-27 14:23:17.472746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.661 [2024-11-27 14:23:18.013044] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:47.662 [2024-11-27 14:23:18.013277] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:23:47.662 [2024-11-27 14:23:18.013316] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:47.662 [2024-11-27 14:23:18.013943] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:47.662 [2024-11-27 14:23:18.014142] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:47.662 [2024-11-27 14:23:18.014171] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:47.662 [2024-11-27 14:23:18.014260] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:23:47.662 00:23:47.662 [2024-11-27 14:23:18.014289] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:49.038 ************************************ 00:23:49.038 END TEST bdev_hello_world 00:23:49.038 ************************************ 00:23:49.038 00:23:49.038 real 0m2.227s 00:23:49.038 user 0m1.797s 00:23:49.038 sys 0m0.309s 00:23:49.038 14:23:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.038 14:23:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:49.038 14:23:19 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:23:49.038 14:23:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:49.038 14:23:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.038 14:23:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:49.038 ************************************ 00:23:49.038 START TEST bdev_bounds 00:23:49.038 ************************************ 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:49.038 Process bdevio pid: 90818 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90818 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90818' 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90818 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90818 ']' 00:23:49.038 14:23:19 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.038 14:23:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:49.038 [2024-11-27 14:23:19.444145] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:23:49.039 [2024-11-27 14:23:19.444351] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90818 ] 00:23:49.297 [2024-11-27 14:23:19.630091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:49.297 [2024-11-27 14:23:19.764308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.297 [2024-11-27 14:23:19.764435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.297 [2024-11-27 14:23:19.764452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.233 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.233 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:50.233 14:23:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:50.233 I/O targets: 00:23:50.233 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:23:50.233 00:23:50.233 
00:23:50.233 CUnit - A unit testing framework for C - Version 2.1-3 00:23:50.233 http://cunit.sourceforge.net/ 00:23:50.233 00:23:50.233 00:23:50.233 Suite: bdevio tests on: raid5f 00:23:50.233 Test: blockdev write read block ...passed 00:23:50.233 Test: blockdev write zeroes read block ...passed 00:23:50.233 Test: blockdev write zeroes read no split ...passed 00:23:50.233 Test: blockdev write zeroes read split ...passed 00:23:50.492 Test: blockdev write zeroes read split partial ...passed 00:23:50.492 Test: blockdev reset ...passed 00:23:50.492 Test: blockdev write read 8 blocks ...passed 00:23:50.492 Test: blockdev write read size > 128k ...passed 00:23:50.492 Test: blockdev write read invalid size ...passed 00:23:50.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:50.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:50.492 Test: blockdev write read max offset ...passed 00:23:50.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:50.492 Test: blockdev writev readv 8 blocks ...passed 00:23:50.492 Test: blockdev writev readv 30 x 1block ...passed 00:23:50.492 Test: blockdev writev readv block ...passed 00:23:50.492 Test: blockdev writev readv size > 128k ...passed 00:23:50.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:50.492 Test: blockdev comparev and writev ...passed 00:23:50.492 Test: blockdev nvme passthru rw ...passed 00:23:50.492 Test: blockdev nvme passthru vendor specific ...passed 00:23:50.492 Test: blockdev nvme admin passthru ...passed 00:23:50.492 Test: blockdev copy ...passed 00:23:50.492 00:23:50.492 Run Summary: Type Total Ran Passed Failed Inactive 00:23:50.492 suites 1 1 n/a 0 0 00:23:50.492 tests 23 23 23 0 0 00:23:50.492 asserts 130 130 130 0 n/a 00:23:50.492 00:23:50.492 Elapsed time = 0.574 seconds 00:23:50.492 0 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90818 00:23:50.492 
14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90818 ']' 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90818 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90818 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90818' 00:23:50.492 killing process with pid 90818 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90818 00:23:50.492 14:23:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90818 00:23:51.870 14:23:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:51.870 00:23:51.870 real 0m2.829s 00:23:51.870 user 0m7.053s 00:23:51.870 sys 0m0.422s 00:23:51.870 14:23:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.870 14:23:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:51.870 ************************************ 00:23:51.870 END TEST bdev_bounds 00:23:51.870 ************************************ 00:23:51.870 14:23:22 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:51.870 14:23:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:51.870 14:23:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.870 
14:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:51.870 ************************************ 00:23:51.870 START TEST bdev_nbd 00:23:51.870 ************************************ 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90882 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90882 /var/tmp/spdk-nbd.sock 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90882 ']' 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:51.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.870 14:23:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:51.870 [2024-11-27 14:23:22.304550] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:23:51.870 [2024-11-27 14:23:22.304891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.129 [2024-11-27 14:23:22.479195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.129 [2024-11-27 14:23:22.597721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:53.066 14:23:23 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.325 1+0 records in 00:23:53.325 1+0 records out 00:23:53.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125705 s, 3.3 MB/s 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:53.325 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:53.584 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:53.584 { 00:23:53.584 "nbd_device": "/dev/nbd0", 00:23:53.584 "bdev_name": "raid5f" 00:23:53.584 } 00:23:53.584 ]' 00:23:53.584 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:53.584 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:53.584 { 00:23:53.584 "nbd_device": "/dev/nbd0", 00:23:53.584 "bdev_name": "raid5f" 00:23:53.584 } 00:23:53.584 ]' 00:23:53.584 14:23:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:53.584 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:53.843 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.411 14:23:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:23:54.670 /dev/nbd0 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:54.670 14:23:25 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:54.670 1+0 records in 00:23:54.670 1+0 records out 00:23:54.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360705 s, 11.4 MB/s 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:54.670 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:54.929 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:54.929 { 00:23:54.929 "nbd_device": "/dev/nbd0", 00:23:54.929 "bdev_name": "raid5f" 00:23:54.929 } 00:23:54.929 ]' 00:23:54.929 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:54.929 { 00:23:54.929 "nbd_device": "/dev/nbd0", 00:23:54.929 "bdev_name": "raid5f" 00:23:54.929 } 00:23:54.929 ]' 00:23:54.929 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:54.929 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:54.929 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:54.929 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:55.188 256+0 records in 00:23:55.188 256+0 records out 00:23:55.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102623 s, 102 MB/s 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:55.188 256+0 records in 00:23:55.188 256+0 records out 00:23:55.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0424372 s, 24.7 MB/s 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:55.188 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:55.448 14:23:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:56.016 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:56.275 malloc_lvol_verify 00:23:56.275 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:56.534 58f37935-20ba-45fd-b783-199b3cd5a70f 00:23:56.534 14:23:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:56.793 2703c7aa-7f03-4e97-a886-c533f472ad78 00:23:56.793 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:57.052 /dev/nbd0 00:23:57.052 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:57.052 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:57.052 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:57.052 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:57.052 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:57.052 mke2fs 1.47.0 (5-Feb-2023) 00:23:57.052 Discarding device blocks: 0/4096 done 00:23:57.052 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:57.052 00:23:57.052 Allocating group tables: 0/1 done 00:23:57.310 Writing inode tables: 0/1 done 00:23:57.310 Creating journal (1024 blocks): done 00:23:57.310 Writing superblocks and filesystem accounting information: 0/1 done 00:23:57.310 00:23:57.310 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:57.310 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:57.310 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:57.310 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:57.311 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:57.311 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:57.311 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90882 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90882 ']' 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90882 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90882 00:23:57.570 killing process with pid 90882 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90882' 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90882 00:23:57.570 14:23:27 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90882 00:23:59.018 ************************************ 00:23:59.018 END TEST bdev_nbd 00:23:59.018 14:23:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:59.018 00:23:59.018 real 0m7.054s 00:23:59.018 user 0m10.319s 00:23:59.018 sys 0m1.506s 00:23:59.018 14:23:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.018 14:23:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:59.018 ************************************ 00:23:59.018 14:23:29 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:59.018 14:23:29 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:23:59.018 14:23:29 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:23:59.018 14:23:29 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:59.018 14:23:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.018 14:23:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.018 14:23:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:59.018 ************************************ 00:23:59.018 START TEST bdev_fio 00:23:59.018 ************************************ 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:59.018 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:59.018 ************************************ 00:23:59.018 START TEST bdev_fio_rw_verify 00:23:59.018 ************************************ 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:59.018 14:23:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:59.275 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:59.275 fio-3.35 00:23:59.275 Starting 1 thread 00:24:11.476 00:24:11.476 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91098: Wed Nov 27 14:23:40 2024 00:24:11.476 read: IOPS=8262, BW=32.3MiB/s (33.8MB/s)(323MiB/10001msec) 00:24:11.476 slat (usec): min=24, max=614, avg=30.47, stdev= 6.69 00:24:11.476 clat (usec): min=14, max=883, avg=192.65, stdev=73.29 00:24:11.476 lat (usec): min=44, max=914, avg=223.13, stdev=74.17 00:24:11.476 clat percentiles (usec): 00:24:11.476 | 50.000th=[ 196], 99.000th=[ 330], 99.900th=[ 457], 99.990th=[ 742], 00:24:11.476 | 99.999th=[ 881] 00:24:11.476 write: IOPS=8722, BW=34.1MiB/s (35.7MB/s)(336MiB/9866msec); 0 zone resets 00:24:11.476 slat (usec): min=12, max=420, avg=23.68, stdev= 6.33 00:24:11.476 clat (usec): min=86, max=1153, avg=440.29, stdev=57.89 00:24:11.476 lat (usec): min=108, max=1178, avg=463.97, stdev=59.13 00:24:11.476 clat percentiles (usec): 00:24:11.476 | 50.000th=[ 445], 99.000th=[ 562], 99.900th=[ 750], 99.990th=[ 1123], 00:24:11.476 | 99.999th=[ 1156] 00:24:11.476 bw ( KiB/s): min=31888, max=37080, per=98.63%, avg=34412.26, stdev=1122.67, samples=19 00:24:11.476 iops : min= 7972, max= 9270, avg=8603.05, stdev=280.66, samples=19 00:24:11.476 lat (usec) : 20=0.01%, 50=0.01%, 100=5.86%, 
250=30.05%, 500=58.18% 00:24:11.476 lat (usec) : 750=5.84%, 1000=0.04% 00:24:11.476 lat (msec) : 2=0.02% 00:24:11.476 cpu : usr=98.22%, sys=0.67%, ctx=38, majf=0, minf=7244 00:24:11.476 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.476 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.476 issued rwts: total=82629,86055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.477 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.477 00:24:11.477 Run status group 0 (all jobs): 00:24:11.477 READ: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=323MiB (338MB), run=10001-10001msec 00:24:11.477 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=336MiB (352MB), run=9866-9866msec 00:24:12.043 ----------------------------------------------------- 00:24:12.043 Suppressions used: 00:24:12.043 count bytes template 00:24:12.043 1 7 /usr/src/fio/parse.c 00:24:12.043 846 81216 /usr/src/fio/iolog.c 00:24:12.043 1 8 libtcmalloc_minimal.so 00:24:12.043 1 904 libcrypto.so 00:24:12.043 ----------------------------------------------------- 00:24:12.043 00:24:12.043 00:24:12.043 real 0m12.862s 00:24:12.043 user 0m13.156s 00:24:12.043 sys 0m0.851s 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:24:12.043 ************************************ 00:24:12.043 END TEST bdev_fio_rw_verify 00:24:12.043 ************************************ 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "879a7759-d4db-4192-badd-55016291993f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "879a7759-d4db-4192-badd-55016291993f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "879a7759-d4db-4192-badd-55016291993f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "71d372d2-e8d6-4099-9190-17aed38753af",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "42011ea3-3b57-4ff4-83b8-02b8ef4a8fec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "cde5bf26-fdee-49f2-a7a5-d2bacbfd9457",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:12.043 /home/vagrant/spdk_repo/spdk 00:24:12.043 ************************************ 00:24:12.043 END TEST bdev_fio 00:24:12.043 ************************************ 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:24:12.043 00:24:12.043 real 0m13.078s 00:24:12.043 user 0m13.250s 00:24:12.043 sys 0m0.949s 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.043 14:23:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:12.043 14:23:42 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:12.043 14:23:42 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:12.043 14:23:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:12.043 14:23:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.043 14:23:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:12.043 ************************************ 00:24:12.043 START TEST bdev_verify 00:24:12.043 ************************************ 00:24:12.043 14:23:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:12.303 [2024-11-27 14:23:42.554913] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 
00:24:12.303 [2024-11-27 14:23:42.555244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91256 ] 00:24:12.303 [2024-11-27 14:23:42.741764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:12.561 [2024-11-27 14:23:42.879161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.561 [2024-11-27 14:23:42.879171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.129 Running I/O for 5 seconds... 00:24:15.000 10166.00 IOPS, 39.71 MiB/s [2024-11-27T14:23:46.447Z] 10450.50 IOPS, 40.82 MiB/s [2024-11-27T14:23:47.838Z] 11032.33 IOPS, 43.10 MiB/s [2024-11-27T14:23:48.785Z] 11099.25 IOPS, 43.36 MiB/s [2024-11-27T14:23:48.785Z] 11528.20 IOPS, 45.03 MiB/s 00:24:18.272 Latency(us) 00:24:18.272 [2024-11-27T14:23:48.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.272 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:18.272 Verification LBA range: start 0x0 length 0x2000 00:24:18.272 raid5f : 5.02 5743.47 22.44 0.00 0.00 33538.09 249.48 28240.06 00:24:18.272 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:18.272 Verification LBA range: start 0x2000 length 0x2000 00:24:18.272 raid5f : 5.01 5775.67 22.56 0.00 0.00 33321.98 247.62 28478.37 00:24:18.272 [2024-11-27T14:23:48.785Z] =================================================================================================================== 00:24:18.272 [2024-11-27T14:23:48.785Z] Total : 11519.14 45.00 0.00 0.00 33429.83 247.62 28478.37 00:24:19.649 00:24:19.649 real 0m7.309s 00:24:19.649 user 0m13.391s 00:24:19.649 sys 0m0.319s 00:24:19.649 14:23:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.649 
************************************ 00:24:19.649 END TEST bdev_verify 00:24:19.649 ************************************ 00:24:19.649 14:23:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:19.649 14:23:49 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:19.649 14:23:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:19.649 14:23:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.649 14:23:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:19.649 ************************************ 00:24:19.649 START TEST bdev_verify_big_io 00:24:19.649 ************************************ 00:24:19.649 14:23:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:19.649 [2024-11-27 14:23:49.891790] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:24:19.649 [2024-11-27 14:23:49.892002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91356 ] 00:24:19.649 [2024-11-27 14:23:50.065297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:19.908 [2024-11-27 14:23:50.200093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.908 [2024-11-27 14:23:50.200098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.475 Running I/O for 5 seconds... 
00:24:22.345 506.00 IOPS, 31.62 MiB/s [2024-11-27T14:23:53.795Z] 634.00 IOPS, 39.62 MiB/s [2024-11-27T14:23:55.174Z] 676.67 IOPS, 42.29 MiB/s [2024-11-27T14:23:56.108Z] 698.00 IOPS, 43.62 MiB/s [2024-11-27T14:23:56.108Z] 710.80 IOPS, 44.42 MiB/s 00:24:25.595 Latency(us) 00:24:25.595 [2024-11-27T14:23:56.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.595 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:25.595 Verification LBA range: start 0x0 length 0x200 00:24:25.595 raid5f : 5.15 370.20 23.14 0.00 0.00 8567429.41 174.08 398458.88 00:24:25.595 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:25.595 Verification LBA range: start 0x200 length 0x200 00:24:25.595 raid5f : 5.11 372.63 23.29 0.00 0.00 8426994.97 283.00 385113.37 00:24:25.595 [2024-11-27T14:23:56.108Z] =================================================================================================================== 00:24:25.595 [2024-11-27T14:23:56.108Z] Total : 742.83 46.43 0.00 0.00 8497212.19 174.08 398458.88 00:24:26.973 00:24:26.973 real 0m7.399s 00:24:26.973 user 0m13.618s 00:24:26.973 sys 0m0.320s 00:24:26.973 14:23:57 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.973 ************************************ 00:24:26.973 14:23:57 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:26.973 END TEST bdev_verify_big_io 00:24:26.973 ************************************ 00:24:26.973 14:23:57 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:26.973 14:23:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:26.973 14:23:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.973 14:23:57 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.973 ************************************ 00:24:26.973 START TEST bdev_write_zeroes 00:24:26.973 ************************************ 00:24:26.973 14:23:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:26.973 [2024-11-27 14:23:57.360584] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:24:26.973 [2024-11-27 14:23:57.360771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91449 ] 00:24:27.232 [2024-11-27 14:23:57.537883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.232 [2024-11-27 14:23:57.656879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.800 Running I/O for 1 seconds... 
00:24:28.736 20247.00 IOPS, 79.09 MiB/s 00:24:28.736 Latency(us) 00:24:28.736 [2024-11-27T14:23:59.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.736 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:28.736 raid5f : 1.01 20224.80 79.00 0.00 0.00 6303.01 2144.81 8817.57 00:24:28.736 [2024-11-27T14:23:59.249Z] =================================================================================================================== 00:24:28.736 [2024-11-27T14:23:59.249Z] Total : 20224.80 79.00 0.00 0.00 6303.01 2144.81 8817.57 00:24:30.111 00:24:30.111 real 0m3.280s 00:24:30.111 user 0m2.842s 00:24:30.111 sys 0m0.306s 00:24:30.111 14:24:00 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.111 14:24:00 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:30.111 ************************************ 00:24:30.111 END TEST bdev_write_zeroes 00:24:30.111 ************************************ 00:24:30.111 14:24:00 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:30.111 14:24:00 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:30.111 14:24:00 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.111 14:24:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:30.111 ************************************ 00:24:30.111 START TEST bdev_json_nonenclosed 00:24:30.111 ************************************ 00:24:30.111 14:24:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:30.369 [2024-11-27 
14:24:00.686627] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:24:30.369 [2024-11-27 14:24:00.686778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91504 ] 00:24:30.369 [2024-11-27 14:24:00.870157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.627 [2024-11-27 14:24:01.031275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.627 [2024-11-27 14:24:01.031424] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:30.627 [2024-11-27 14:24:01.031473] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:30.627 [2024-11-27 14:24:01.031491] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:30.884 00:24:30.884 real 0m0.711s 00:24:30.884 user 0m0.470s 00:24:30.884 sys 0m0.135s 00:24:30.884 14:24:01 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.884 14:24:01 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:30.884 ************************************ 00:24:30.884 END TEST bdev_json_nonenclosed 00:24:30.884 ************************************ 00:24:30.884 14:24:01 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:30.884 14:24:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:30.884 14:24:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.884 14:24:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:30.884 
************************************ 00:24:30.885 START TEST bdev_json_nonarray 00:24:30.885 ************************************ 00:24:30.885 14:24:01 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:31.142 [2024-11-27 14:24:01.453341] Starting SPDK v25.01-pre git sha1 9094b9600 / DPDK 24.03.0 initialization... 00:24:31.142 [2024-11-27 14:24:01.453486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91534 ] 00:24:31.142 [2024-11-27 14:24:01.627753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.400 [2024-11-27 14:24:01.762314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.400 [2024-11-27 14:24:01.762439] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:24:31.400 [2024-11-27 14:24:01.762469] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:31.400 [2024-11-27 14:24:01.762498] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:31.658 00:24:31.658 real 0m0.670s 00:24:31.658 user 0m0.435s 00:24:31.658 sys 0m0.129s 00:24:31.658 14:24:02 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.658 14:24:02 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:31.658 ************************************ 00:24:31.658 END TEST bdev_json_nonarray 00:24:31.659 ************************************ 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:24:31.659 14:24:02 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:24:31.659 00:24:31.659 real 0m49.359s 00:24:31.659 user 1m7.613s 00:24:31.659 sys 0m5.378s 00:24:31.659 14:24:02 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.659 14:24:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:31.659 
************************************ 00:24:31.659 END TEST blockdev_raid5f 00:24:31.659 ************************************ 00:24:31.659 14:24:02 -- spdk/autotest.sh@194 -- # uname -s 00:24:31.659 14:24:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:31.659 14:24:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:31.659 14:24:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:31.659 14:24:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:31.659 14:24:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.659 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:24:31.659 14:24:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:31.659 14:24:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:31.917 14:24:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:31.917 14:24:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:31.917 14:24:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:31.917 14:24:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:31.917 14:24:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:31.917 14:24:02 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:24:31.917 14:24:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:31.917 14:24:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.917 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:24:31.917 14:24:02 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:31.917 14:24:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:31.917 14:24:02 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:31.917 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:24:33.820 INFO: APP EXITING 00:24:33.820 INFO: killing all VMs 00:24:33.820 INFO: killing vhost app 00:24:33.820 INFO: EXIT DONE 00:24:33.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.820 Waiting for block devices as requested 00:24:33.820 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:33.820 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:34.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:34.752 Cleaning 00:24:34.752 Removing: /var/run/dpdk/spdk0/config 00:24:34.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:34.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:34.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:34.752 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:34.752 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:34.752 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:34.752 Removing: /dev/shm/spdk_tgt_trace.pid56965 00:24:34.752 Removing: /var/run/dpdk/spdk0 00:24:34.752 Removing: /var/run/dpdk/spdk_pid56724 00:24:34.752 Removing: /var/run/dpdk/spdk_pid56965 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57194 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57298 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57354 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57482 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57501 
00:24:34.752 Removing: /var/run/dpdk/spdk_pid57710 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57827 00:24:34.752 Removing: /var/run/dpdk/spdk_pid57934 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58056 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58164 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58208 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58240 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58316 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58428 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58899 00:24:34.752 Removing: /var/run/dpdk/spdk_pid58974 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59050 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59066 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59215 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59237 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59385 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59406 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59477 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59495 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59559 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59588 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59783 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59820 00:24:34.752 Removing: /var/run/dpdk/spdk_pid59905 00:24:34.752 Removing: /var/run/dpdk/spdk_pid61298 00:24:34.752 Removing: /var/run/dpdk/spdk_pid61514 00:24:34.752 Removing: /var/run/dpdk/spdk_pid61656 00:24:34.752 Removing: /var/run/dpdk/spdk_pid62310 00:24:34.752 Removing: /var/run/dpdk/spdk_pid62522 00:24:34.752 Removing: /var/run/dpdk/spdk_pid62673 00:24:34.752 Removing: /var/run/dpdk/spdk_pid63334 00:24:34.752 Removing: /var/run/dpdk/spdk_pid63670 00:24:34.752 Removing: /var/run/dpdk/spdk_pid63821 00:24:34.752 Removing: /var/run/dpdk/spdk_pid65245 00:24:34.752 Removing: /var/run/dpdk/spdk_pid65498 00:24:34.752 Removing: /var/run/dpdk/spdk_pid65644 00:24:34.752 Removing: /var/run/dpdk/spdk_pid67072 00:24:34.752 Removing: /var/run/dpdk/spdk_pid67331 00:24:34.752 Removing: /var/run/dpdk/spdk_pid67471 
00:24:34.752 Removing: /var/run/dpdk/spdk_pid68885 00:24:34.752 Removing: /var/run/dpdk/spdk_pid69342 00:24:34.752 Removing: /var/run/dpdk/spdk_pid69493 00:24:34.752 Removing: /var/run/dpdk/spdk_pid71016 00:24:34.752 Removing: /var/run/dpdk/spdk_pid71287 00:24:34.752 Removing: /var/run/dpdk/spdk_pid71433 00:24:34.752 Removing: /var/run/dpdk/spdk_pid72946 00:24:34.752 Removing: /var/run/dpdk/spdk_pid73211 00:24:35.011 Removing: /var/run/dpdk/spdk_pid73362 00:24:35.011 Removing: /var/run/dpdk/spdk_pid74875 00:24:35.011 Removing: /var/run/dpdk/spdk_pid75368 00:24:35.011 Removing: /var/run/dpdk/spdk_pid75518 00:24:35.011 Removing: /var/run/dpdk/spdk_pid75663 00:24:35.011 Removing: /var/run/dpdk/spdk_pid76120 00:24:35.011 Removing: /var/run/dpdk/spdk_pid76883 00:24:35.011 Removing: /var/run/dpdk/spdk_pid77284 00:24:35.011 Removing: /var/run/dpdk/spdk_pid78003 00:24:35.011 Removing: /var/run/dpdk/spdk_pid78484 00:24:35.011 Removing: /var/run/dpdk/spdk_pid79288 00:24:35.011 Removing: /var/run/dpdk/spdk_pid79710 00:24:35.011 Removing: /var/run/dpdk/spdk_pid81727 00:24:35.011 Removing: /var/run/dpdk/spdk_pid82180 00:24:35.011 Removing: /var/run/dpdk/spdk_pid82630 00:24:35.011 Removing: /var/run/dpdk/spdk_pid84763 00:24:35.011 Removing: /var/run/dpdk/spdk_pid85254 00:24:35.011 Removing: /var/run/dpdk/spdk_pid85759 00:24:35.011 Removing: /var/run/dpdk/spdk_pid86840 00:24:35.011 Removing: /var/run/dpdk/spdk_pid87174 00:24:35.011 Removing: /var/run/dpdk/spdk_pid88131 00:24:35.011 Removing: /var/run/dpdk/spdk_pid88465 00:24:35.011 Removing: /var/run/dpdk/spdk_pid89416 00:24:35.011 Removing: /var/run/dpdk/spdk_pid89750 00:24:35.011 Removing: /var/run/dpdk/spdk_pid90431 00:24:35.011 Removing: /var/run/dpdk/spdk_pid90714 00:24:35.011 Removing: /var/run/dpdk/spdk_pid90776 00:24:35.011 Removing: /var/run/dpdk/spdk_pid90818 00:24:35.011 Removing: /var/run/dpdk/spdk_pid91086 00:24:35.011 Removing: /var/run/dpdk/spdk_pid91256 00:24:35.011 Removing: /var/run/dpdk/spdk_pid91356 
00:24:35.011 Removing: /var/run/dpdk/spdk_pid91449 00:24:35.011 Removing: /var/run/dpdk/spdk_pid91504 00:24:35.011 Removing: /var/run/dpdk/spdk_pid91534 00:24:35.011 Clean 00:24:35.011 14:24:05 -- common/autotest_common.sh@1453 -- # return 0 00:24:35.011 14:24:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:35.011 14:24:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.011 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.011 14:24:05 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:35.011 14:24:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.011 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.011 14:24:05 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:35.011 14:24:05 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:35.011 14:24:05 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:35.011 14:24:05 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:35.011 14:24:05 -- spdk/autotest.sh@398 -- # hostname 00:24:35.011 14:24:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:35.269 geninfo: WARNING: invalid characters removed from testname! 
00:25:01.823 14:24:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:05.109 14:24:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:07.675 14:24:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:10.205 14:24:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:12.735 14:24:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:16.016 14:24:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:18.545 14:24:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:18.545 14:24:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:18.545 14:24:48 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:18.545 14:24:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:18.545 14:24:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:18.545 14:24:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:18.545 + [[ -n 5267 ]] 00:25:18.545 + sudo kill 5267 00:25:18.553 [Pipeline] } 00:25:18.569 [Pipeline] // timeout 00:25:18.574 [Pipeline] } 00:25:18.589 [Pipeline] // stage 00:25:18.596 [Pipeline] } 00:25:18.612 [Pipeline] // catchError 00:25:18.623 [Pipeline] stage 00:25:18.625 [Pipeline] { (Stop VM) 00:25:18.641 [Pipeline] sh 00:25:18.924 + vagrant halt 00:25:22.366 ==> default: Halting domain... 00:25:27.644 [Pipeline] sh 00:25:27.924 + vagrant destroy -f 00:25:31.215 ==> default: Removing domain... 
00:25:31.253 [Pipeline] sh 00:25:31.571 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:25:31.579 [Pipeline] } 00:25:31.594 [Pipeline] // stage 00:25:31.600 [Pipeline] } 00:25:31.615 [Pipeline] // dir 00:25:31.622 [Pipeline] } 00:25:31.637 [Pipeline] // wrap 00:25:31.644 [Pipeline] } 00:25:31.657 [Pipeline] // catchError 00:25:31.670 [Pipeline] stage 00:25:31.672 [Pipeline] { (Epilogue) 00:25:31.688 [Pipeline] sh 00:25:31.970 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:38.544 [Pipeline] catchError 00:25:38.546 [Pipeline] { 00:25:38.558 [Pipeline] sh 00:25:38.835 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:39.093 Artifacts sizes are good 00:25:39.101 [Pipeline] } 00:25:39.117 [Pipeline] // catchError 00:25:39.130 [Pipeline] archiveArtifacts 00:25:39.136 Archiving artifacts 00:25:39.239 [Pipeline] cleanWs 00:25:39.254 [WS-CLEANUP] Deleting project workspace... 00:25:39.254 [WS-CLEANUP] Deferred wipeout is used... 00:25:39.260 [WS-CLEANUP] done 00:25:39.262 [Pipeline] } 00:25:39.279 [Pipeline] // stage 00:25:39.285 [Pipeline] } 00:25:39.298 [Pipeline] // node 00:25:39.303 [Pipeline] End of Pipeline 00:25:39.341 Finished: SUCCESS